00:00:00.001 Started by upstream project "autotest-nightly" build number 3911 00:00:00.001 originally caused by: 00:00:00.001 Started by user Latecki, Karol 00:00:00.002 Started by upstream project "autotest-nightly" build number 3909 00:00:00.002 originally caused by: 00:00:00.002 Started by user Latecki, Karol 00:00:00.003 Started by upstream project "autotest-nightly" build number 3908 00:00:00.003 originally caused by: 00:00:00.003 Started by user Latecki, Karol 00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.117 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.191 Fetching changes from the remote Git repository 00:00:00.192 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.266 Using shallow fetch with depth 1 00:00:00.266 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.266 > git --version # timeout=10 00:00:00.367 > git --version # 'git version 2.39.2' 00:00:00.367 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.389 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.389 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:07.223 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.234 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.244 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:07.244 > git config core.sparsecheckout # timeout=10 00:00:07.254 > git read-tree -mu HEAD # timeout=10 00:00:07.269 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:07.290 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:07.290 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:07.396 [Pipeline] Start of Pipeline 00:00:07.407 [Pipeline] library 00:00:07.408 Loading library shm_lib@master 00:00:07.408 Library shm_lib@master is cached. Copying from home. 
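The checkout above pins the jbp (jenkins_build_pool) repository to a single Gerrit patch set, refs/changes/29/24129/6, using a depth-1 fetch followed by a detached checkout of FETCH_HEAD. A minimal sketch of the same pattern for reproducing that checkout by hand is shown below; the target directory is an assumption, and the /a/ URL normally requires the credentials the job supplies via GIT_ASKPASS plus the proxy-dmz.intel.com proxy.

    # Sketch only: shallow-fetch one Gerrit patch set and check it out detached.
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool \
        refs/changes/29/24129/6
    git checkout -f FETCH_HEAD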
00:00:07.424 [Pipeline] node 00:00:07.433 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.435 [Pipeline] { 00:00:07.444 [Pipeline] catchError 00:00:07.446 [Pipeline] { 00:00:07.454 [Pipeline] wrap 00:00:07.461 [Pipeline] { 00:00:07.470 [Pipeline] stage 00:00:07.471 [Pipeline] { (Prologue) 00:00:07.724 [Pipeline] sh 00:00:08.019 + logger -p user.info -t JENKINS-CI 00:00:08.036 [Pipeline] echo 00:00:08.037 Node: CYP9 00:00:08.044 [Pipeline] sh 00:00:08.350 [Pipeline] setCustomBuildProperty 00:00:08.364 [Pipeline] echo 00:00:08.366 Cleanup processes 00:00:08.371 [Pipeline] sh 00:00:08.666 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.666 2538501 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.683 [Pipeline] sh 00:00:08.983 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.983 ++ grep -v 'sudo pgrep' 00:00:08.983 ++ awk '{print $1}' 00:00:08.983 + sudo kill -9 00:00:08.983 + true 00:00:09.020 [Pipeline] cleanWs 00:00:09.030 [WS-CLEANUP] Deleting project workspace... 00:00:09.030 [WS-CLEANUP] Deferred wipeout is used... 00:00:09.037 [WS-CLEANUP] done 00:00:09.041 [Pipeline] setCustomBuildProperty 00:00:09.057 [Pipeline] sh 00:00:09.340 + sudo git config --global --replace-all safe.directory '*' 00:00:09.444 [Pipeline] httpRequest 00:00:09.479 [Pipeline] echo 00:00:09.480 Sorcerer 10.211.164.101 is alive 00:00:09.486 [Pipeline] httpRequest 00:00:09.495 HttpMethod: GET 00:00:09.495 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:09.497 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:09.519 Response Code: HTTP/1.1 200 OK 00:00:09.520 Success: Status code 200 is in the accepted range: 200,404 00:00:09.520 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:12.445 [Pipeline] sh 00:00:12.735 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:12.752 [Pipeline] httpRequest 00:00:12.780 [Pipeline] echo 00:00:12.782 Sorcerer 10.211.164.101 is alive 00:00:12.792 [Pipeline] httpRequest 00:00:12.797 HttpMethod: GET 00:00:12.798 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:12.799 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:12.814 Response Code: HTTP/1.1 200 OK 00:00:12.814 Success: Status code 200 is in the accepted range: 200,404 00:00:12.815 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:17.909 [Pipeline] sh 00:01:18.204 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:20.763 [Pipeline] sh 00:01:21.053 + git -C spdk log --oneline -n5 00:01:21.053 f7b31b2b9 log: declare g_deprecation_epoch static 00:01:21.053 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:01:21.053 3731556bd lvol: declare g_lvol_if static 00:01:21.053 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:01:21.053 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:01:21.065 [Pipeline] } 00:01:21.084 [Pipeline] // stage 00:01:21.093 [Pipeline] stage 00:01:21.096 [Pipeline] { (Prepare) 00:01:21.115 [Pipeline] writeFile 00:01:21.133 [Pipeline] sh 00:01:21.421 + logger -p user.info -t JENKINS-CI 
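In the Prologue above the job first kills any SPDK processes left in the workspace by a previous run (the trailing "+ true" keeps an empty pgrep result from failing the build), then wipes the workspace and pulls pre-packaged jbp and spdk tarballs from the Sorcerer cache at 10.211.164.101 instead of cloning. A rough equivalent of the cleanup step is sketched below; the workspace path is taken from this job, and xargs -r stands in for the command substitution the pipeline itself uses.

    # Sketch only: kill leftover processes referencing the workspace, tolerating no matches.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    sudo pgrep -af "$WORKSPACE/spdk" \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9 || true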
00:01:21.434 [Pipeline] sh 00:01:21.722 + logger -p user.info -t JENKINS-CI 00:01:21.736 [Pipeline] sh 00:01:22.024 + cat autorun-spdk.conf 00:01:22.024 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.024 SPDK_TEST_NVMF=1 00:01:22.024 SPDK_TEST_NVME_CLI=1 00:01:22.024 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.024 SPDK_TEST_NVMF_NICS=e810 00:01:22.024 SPDK_RUN_ASAN=1 00:01:22.024 SPDK_RUN_UBSAN=1 00:01:22.024 NET_TYPE=phy 00:01:22.033 RUN_NIGHTLY=1 00:01:22.037 [Pipeline] readFile 00:01:22.062 [Pipeline] withEnv 00:01:22.064 [Pipeline] { 00:01:22.078 [Pipeline] sh 00:01:22.367 + set -ex 00:01:22.367 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:22.367 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.367 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.367 ++ SPDK_TEST_NVMF=1 00:01:22.367 ++ SPDK_TEST_NVME_CLI=1 00:01:22.367 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.367 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.367 ++ SPDK_RUN_ASAN=1 00:01:22.367 ++ SPDK_RUN_UBSAN=1 00:01:22.367 ++ NET_TYPE=phy 00:01:22.367 ++ RUN_NIGHTLY=1 00:01:22.367 + case $SPDK_TEST_NVMF_NICS in 00:01:22.367 + DRIVERS=ice 00:01:22.367 + [[ tcp == \r\d\m\a ]] 00:01:22.367 + [[ -n ice ]] 00:01:22.368 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:22.368 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:22.368 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:22.368 rmmod: ERROR: Module irdma is not currently loaded 00:01:22.368 rmmod: ERROR: Module i40iw is not currently loaded 00:01:22.368 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:22.368 + true 00:01:22.368 + for D in $DRIVERS 00:01:22.368 + sudo modprobe ice 00:01:22.368 + exit 0 00:01:22.378 [Pipeline] } 00:01:22.397 [Pipeline] // withEnv 00:01:22.403 [Pipeline] } 00:01:22.422 [Pipeline] // stage 00:01:22.431 [Pipeline] catchError 00:01:22.433 [Pipeline] { 00:01:22.449 [Pipeline] timeout 00:01:22.449 Timeout set to expire in 50 min 00:01:22.451 [Pipeline] { 00:01:22.467 [Pipeline] stage 00:01:22.469 [Pipeline] { (Tests) 00:01:22.485 [Pipeline] sh 00:01:22.774 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.774 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.774 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.774 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:22.774 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.774 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:22.774 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:22.774 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:22.774 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:22.774 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:22.774 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:22.774 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:22.774 + source /etc/os-release
00:01:22.774 ++ NAME='Fedora Linux'
00:01:22.774 ++ VERSION='38 (Cloud Edition)'
00:01:22.774 ++ ID=fedora
00:01:22.774 ++ VERSION_ID=38
00:01:22.774 ++ VERSION_CODENAME=
00:01:22.774 ++ PLATFORM_ID=platform:f38
00:01:22.774 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:22.774 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:22.774 ++ LOGO=fedora-logo-icon
00:01:22.774 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:22.774 ++ HOME_URL=https://fedoraproject.org/
00:01:22.774 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:22.774 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:22.774 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:22.774 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:22.774 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:22.774 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:22.774 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:22.774 ++ SUPPORT_END=2024-05-14
00:01:22.774 ++ VARIANT='Cloud Edition'
00:01:22.774 ++ VARIANT_ID=cloud
00:01:22.774 + uname -a
00:01:22.774 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:22.774 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:26.076 Hugepages
00:01:26.076 node hugesize free / total
00:01:26.076 node0 1048576kB 0 / 0
00:01:26.076 node0 2048kB 0 / 0
00:01:26.076 node1 1048576kB 0 / 0
00:01:26.076 node1 2048kB 0 / 0
00:01:26.076
00:01:26.076 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:26.076 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:26.076 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:26.076 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:26.076 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:26.076 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:26.076 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:26.076 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:26.076 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:26.076 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:26.076 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:26.076 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:26.076 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:26.076 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:26.076 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:26.076 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:26.076 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:26.076 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:26.076 + rm -f /tmp/spdk-ld-path
00:01:26.076 + source autorun-spdk.conf
00:01:26.076 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.076 ++ SPDK_TEST_NVMF=1
00:01:26.076 ++ SPDK_TEST_NVME_CLI=1
00:01:26.076 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.076 ++ SPDK_TEST_NVMF_NICS=e810
00:01:26.076 ++ SPDK_RUN_ASAN=1
00:01:26.076 ++ SPDK_RUN_UBSAN=1
00:01:26.076 ++ NET_TYPE=phy
00:01:26.076 ++ RUN_NIGHTLY=1
00:01:26.076 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:26.076 + [[ -n '' ]]
00:01:26.076 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:26.076 + for M in /var/spdk/build-*-manifest.txt
00:01:26.076 + [[ -f
/var/spdk/build-pkg-manifest.txt ]] 00:01:26.076 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:26.076 + for M in /var/spdk/build-*-manifest.txt 00:01:26.076 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.076 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:26.076 ++ uname 00:01:26.076 + [[ Linux == \L\i\n\u\x ]] 00:01:26.076 + sudo dmesg -T 00:01:26.076 + sudo dmesg --clear 00:01:26.076 + dmesg_pid=2539474 00:01:26.076 + [[ Fedora Linux == FreeBSD ]] 00:01:26.076 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.076 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.076 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.076 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:26.076 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:26.076 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.076 + export FIO_BIN=/usr/src/fio-static/fio 00:01:26.076 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.076 + sudo dmesg -Tw 00:01:26.076 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.076 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:26.076 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.076 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.076 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.076 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.076 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.076 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.076 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:26.076 Test configuration: 00:01:26.076 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.076 SPDK_TEST_NVMF=1 00:01:26.076 SPDK_TEST_NVME_CLI=1 00:01:26.076 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.076 SPDK_TEST_NVMF_NICS=e810 00:01:26.076 SPDK_RUN_ASAN=1 00:01:26.076 SPDK_RUN_UBSAN=1 00:01:26.076 NET_TYPE=phy 00:01:26.076 RUN_NIGHTLY=1 19:05:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:26.076 19:05:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:26.076 19:05:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:26.076 19:05:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:26.076 19:05:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.076 19:05:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.077 19:05:44 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.077 19:05:44 -- paths/export.sh@5 -- $ export PATH 00:01:26.077 19:05:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.077 19:05:44 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:26.077 19:05:44 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:26.077 19:05:44 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721667944.XXXXXX 00:01:26.077 19:05:44 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721667944.USBIU9 00:01:26.077 19:05:44 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:26.077 19:05:44 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:26.077 19:05:44 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:26.077 19:05:44 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:26.077 19:05:44 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:26.077 19:05:44 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:26.077 19:05:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:26.077 19:05:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.077 19:05:44 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:26.077 19:05:44 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:26.077 19:05:44 -- pm/common@17 -- $ local monitor 00:01:26.077 19:05:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.077 19:05:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.077 19:05:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.077 19:05:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.077 19:05:44 -- pm/common@21 -- $ date +%s 00:01:26.077 19:05:44 -- pm/common@25 -- $ sleep 1 00:01:26.077 19:05:44 -- pm/common@21 -- $ date +%s 00:01:26.077 19:05:44 -- pm/common@21 -- $ date +%s 00:01:26.077 19:05:44 -- pm/common@21 -- $ date +%s 00:01:26.077 19:05:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721667944 00:01:26.077 19:05:44 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721667944 00:01:26.077 19:05:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721667944 00:01:26.077 19:05:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721667944 00:01:26.077 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721667944_collect-vmstat.pm.log 00:01:26.077 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721667944_collect-cpu-load.pm.log 00:01:26.077 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721667944_collect-cpu-temp.pm.log 00:01:26.077 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721667944_collect-bmc-pm.bmc.pm.log 00:01:27.056 19:05:45 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:27.056 19:05:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:27.056 19:05:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:27.056 19:05:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.056 19:05:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:27.056 Mon Jul 22 05:05:45 PM UTC 2024 00:01:27.056 19:05:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:27.056 v24.09-pre-297-gf7b31b2b9 00:01:27.056 19:05:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:27.056 19:05:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:27.056 19:05:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:27.056 19:05:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:27.056 19:05:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.056 ************************************ 00:01:27.056 START TEST asan 00:01:27.056 ************************************ 00:01:27.056 19:05:45 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:27.056 using asan 00:01:27.056 00:01:27.056 real 0m0.001s 00:01:27.056 user 0m0.001s 00:01:27.056 sys 0m0.000s 00:01:27.056 19:05:45 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:27.056 19:05:45 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.056 ************************************ 00:01:27.056 END TEST asan 00:01:27.056 ************************************ 00:01:27.056 19:05:45 -- common/autotest_common.sh@1142 -- $ return 0 00:01:27.056 19:05:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.056 19:05:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.056 19:05:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:27.056 19:05:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:27.056 19:05:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.056 ************************************ 00:01:27.056 START TEST ubsan 00:01:27.056 ************************************ 00:01:27.056 19:05:45 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:27.056 using ubsan 00:01:27.056 00:01:27.056 real 0m0.000s 00:01:27.056 user 0m0.000s 00:01:27.056 sys 
0m0.000s 00:01:27.056 19:05:45 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:27.056 19:05:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.056 ************************************ 00:01:27.056 END TEST ubsan 00:01:27.056 ************************************ 00:01:27.056 19:05:45 -- common/autotest_common.sh@1142 -- $ return 0 00:01:27.056 19:05:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:27.056 19:05:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.056 19:05:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.056 19:05:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.056 19:05:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.056 19:05:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.056 19:05:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.056 19:05:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.056 19:05:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:27.317 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:27.317 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:27.578 Using 'verbs' RDMA provider 00:01:43.436 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:55.673 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:55.673 Creating mk/config.mk...done. 00:01:55.673 Creating mk/cc.flags.mk...done. 00:01:55.673 Type 'make' to build. 00:01:55.673 19:06:14 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:55.673 19:06:14 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:55.673 19:06:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:55.673 19:06:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.673 ************************************ 00:01:55.673 START TEST make 00:01:55.673 ************************************ 00:01:55.673 19:06:14 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:55.673 make[1]: Nothing to be done for 'all'. 
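The autobuild step above configures SPDK with ASAN and UBSAN enabled and a shared-library build of the bundled DPDK, then compiles with make -j144. The flags below are copied from the configure invocation in this log; reproducing the build outside Jenkins is a sketch under the assumption that fio sources live at /usr/src/fio and that the RDMA and ublk prerequisites are installed.

    # Sketch only: same configure flags as the CI run, parallelism scaled to the local host.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
    make -j"$(nproc)"   # the CI host passes -j144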
00:02:05.669 The Meson build system 00:02:05.669 Version: 1.3.1 00:02:05.669 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:05.669 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:05.669 Build type: native build 00:02:05.669 Program cat found: YES (/usr/bin/cat) 00:02:05.669 Project name: DPDK 00:02:05.669 Project version: 24.03.0 00:02:05.669 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:05.669 C linker for the host machine: cc ld.bfd 2.39-16 00:02:05.669 Host machine cpu family: x86_64 00:02:05.669 Host machine cpu: x86_64 00:02:05.669 Message: ## Building in Developer Mode ## 00:02:05.669 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.669 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.669 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.669 Program python3 found: YES (/usr/bin/python3) 00:02:05.669 Program cat found: YES (/usr/bin/cat) 00:02:05.669 Compiler for C supports arguments -march=native: YES 00:02:05.669 Checking for size of "void *" : 8 00:02:05.669 Checking for size of "void *" : 8 (cached) 00:02:05.669 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:05.669 Library m found: YES 00:02:05.669 Library numa found: YES 00:02:05.669 Has header "numaif.h" : YES 00:02:05.669 Library fdt found: NO 00:02:05.669 Library execinfo found: NO 00:02:05.669 Has header "execinfo.h" : YES 00:02:05.669 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:05.669 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.669 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.669 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.669 Run-time dependency openssl found: YES 3.0.9 00:02:05.669 Run-time dependency libpcap found: YES 1.10.4 00:02:05.669 Has header "pcap.h" with dependency libpcap: YES 00:02:05.669 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.669 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.669 Compiler for C supports arguments -Wformat: YES 00:02:05.669 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.669 Compiler for C supports arguments -Wformat-security: NO 00:02:05.669 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.669 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.669 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.669 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.669 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.669 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.669 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.669 Compiler for C supports arguments -Wundef: YES 00:02:05.669 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.669 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.669 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.669 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.669 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.669 Program objdump found: YES (/usr/bin/objdump) 00:02:05.670 Compiler for C supports arguments -mavx512f: YES 00:02:05.670 Checking if "AVX512 checking" compiles: YES 
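This Meson run is driven for the bundled DPDK by SPDK's build system; the options it passes are reported further down under "User defined options" (buildtype debug, shared default_library, b_sanitize address, and long disable_apps/disable_libs lists). A hand-run approximation is sketched below with only the shorter options spelled out; the full disable lists and the exact prefix appear later in this log and are omitted here for brevity.

    # Sketch only: approximate manual equivalent of the DPDK configure step above;
    # normally 'make' after ./configure does this through SPDK's dpdkbuild wrapper.
    cd spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        -Db_sanitize=address -Dcpu_instruction_set=native \
        -Dmax_lcores=128 -Dtests=false -Denable_kmods=false \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'
    ninja -C build-tmp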
00:02:05.670 Fetching value of define "__SSE4_2__" : 1 00:02:05.670 Fetching value of define "__AES__" : 1 00:02:05.670 Fetching value of define "__AVX__" : 1 00:02:05.670 Fetching value of define "__AVX2__" : 1 00:02:05.670 Fetching value of define "__AVX512BW__" : 1 00:02:05.670 Fetching value of define "__AVX512CD__" : 1 00:02:05.670 Fetching value of define "__AVX512DQ__" : 1 00:02:05.670 Fetching value of define "__AVX512F__" : 1 00:02:05.670 Fetching value of define "__AVX512VL__" : 1 00:02:05.670 Fetching value of define "__PCLMUL__" : 1 00:02:05.670 Fetching value of define "__RDRND__" : 1 00:02:05.670 Fetching value of define "__RDSEED__" : 1 00:02:05.670 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:05.670 Fetching value of define "__znver1__" : (undefined) 00:02:05.670 Fetching value of define "__znver2__" : (undefined) 00:02:05.670 Fetching value of define "__znver3__" : (undefined) 00:02:05.670 Fetching value of define "__znver4__" : (undefined) 00:02:05.670 Library asan found: YES 00:02:05.670 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.670 Message: lib/log: Defining dependency "log" 00:02:05.670 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.670 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.670 Library rt found: YES 00:02:05.670 Checking for function "getentropy" : NO 00:02:05.670 Message: lib/eal: Defining dependency "eal" 00:02:05.670 Message: lib/ring: Defining dependency "ring" 00:02:05.670 Message: lib/rcu: Defining dependency "rcu" 00:02:05.670 Message: lib/mempool: Defining dependency "mempool" 00:02:05.670 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.670 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.670 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.670 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.670 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.670 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.670 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:05.670 Compiler for C supports arguments -mpclmul: YES 00:02:05.670 Compiler for C supports arguments -maes: YES 00:02:05.670 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.670 Compiler for C supports arguments -mavx512bw: YES 00:02:05.670 Compiler for C supports arguments -mavx512dq: YES 00:02:05.670 Compiler for C supports arguments -mavx512vl: YES 00:02:05.670 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.670 Compiler for C supports arguments -mavx2: YES 00:02:05.670 Compiler for C supports arguments -mavx: YES 00:02:05.670 Message: lib/net: Defining dependency "net" 00:02:05.670 Message: lib/meter: Defining dependency "meter" 00:02:05.670 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.670 Message: lib/pci: Defining dependency "pci" 00:02:05.670 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.670 Message: lib/hash: Defining dependency "hash" 00:02:05.670 Message: lib/timer: Defining dependency "timer" 00:02:05.670 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.670 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.670 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.670 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.670 Message: lib/power: Defining dependency "power" 00:02:05.670 Message: lib/reorder: Defining dependency "reorder" 00:02:05.670 Message: lib/security: Defining dependency "security" 00:02:05.670 Has header "linux/userfaultfd.h" 
: YES 00:02:05.670 Has header "linux/vduse.h" : YES 00:02:05.670 Message: lib/vhost: Defining dependency "vhost" 00:02:05.670 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.670 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.670 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.670 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.670 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.670 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.670 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.670 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.670 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.670 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.670 Program doxygen found: YES (/usr/bin/doxygen) 00:02:05.670 Configuring doxy-api-html.conf using configuration 00:02:05.670 Configuring doxy-api-man.conf using configuration 00:02:05.670 Program mandb found: YES (/usr/bin/mandb) 00:02:05.670 Program sphinx-build found: NO 00:02:05.670 Configuring rte_build_config.h using configuration 00:02:05.670 Message: 00:02:05.670 ================= 00:02:05.670 Applications Enabled 00:02:05.670 ================= 00:02:05.670 00:02:05.670 apps: 00:02:05.670 00:02:05.670 00:02:05.670 Message: 00:02:05.670 ================= 00:02:05.670 Libraries Enabled 00:02:05.670 ================= 00:02:05.670 00:02:05.670 libs: 00:02:05.670 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.670 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.670 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.670 00:02:05.670 Message: 00:02:05.670 =============== 00:02:05.670 Drivers Enabled 00:02:05.670 =============== 00:02:05.670 00:02:05.670 common: 00:02:05.670 00:02:05.670 bus: 00:02:05.670 pci, vdev, 00:02:05.670 mempool: 00:02:05.670 ring, 00:02:05.670 dma: 00:02:05.670 00:02:05.670 net: 00:02:05.670 00:02:05.670 crypto: 00:02:05.670 00:02:05.670 compress: 00:02:05.670 00:02:05.670 vdpa: 00:02:05.670 00:02:05.670 00:02:05.670 Message: 00:02:05.670 ================= 00:02:05.670 Content Skipped 00:02:05.670 ================= 00:02:05.670 00:02:05.670 apps: 00:02:05.670 dumpcap: explicitly disabled via build config 00:02:05.670 graph: explicitly disabled via build config 00:02:05.670 pdump: explicitly disabled via build config 00:02:05.670 proc-info: explicitly disabled via build config 00:02:05.670 test-acl: explicitly disabled via build config 00:02:05.670 test-bbdev: explicitly disabled via build config 00:02:05.670 test-cmdline: explicitly disabled via build config 00:02:05.670 test-compress-perf: explicitly disabled via build config 00:02:05.670 test-crypto-perf: explicitly disabled via build config 00:02:05.670 test-dma-perf: explicitly disabled via build config 00:02:05.670 test-eventdev: explicitly disabled via build config 00:02:05.670 test-fib: explicitly disabled via build config 00:02:05.670 test-flow-perf: explicitly disabled via build config 00:02:05.670 test-gpudev: explicitly disabled via build config 00:02:05.670 test-mldev: explicitly disabled via build config 00:02:05.670 test-pipeline: explicitly disabled via build config 00:02:05.670 test-pmd: explicitly disabled via build config 00:02:05.670 test-regex: explicitly disabled via build config 00:02:05.670 test-sad: explicitly disabled via 
build config 00:02:05.670 test-security-perf: explicitly disabled via build config 00:02:05.670 00:02:05.670 libs: 00:02:05.670 argparse: explicitly disabled via build config 00:02:05.670 metrics: explicitly disabled via build config 00:02:05.670 acl: explicitly disabled via build config 00:02:05.670 bbdev: explicitly disabled via build config 00:02:05.670 bitratestats: explicitly disabled via build config 00:02:05.670 bpf: explicitly disabled via build config 00:02:05.670 cfgfile: explicitly disabled via build config 00:02:05.670 distributor: explicitly disabled via build config 00:02:05.670 efd: explicitly disabled via build config 00:02:05.670 eventdev: explicitly disabled via build config 00:02:05.670 dispatcher: explicitly disabled via build config 00:02:05.670 gpudev: explicitly disabled via build config 00:02:05.670 gro: explicitly disabled via build config 00:02:05.670 gso: explicitly disabled via build config 00:02:05.670 ip_frag: explicitly disabled via build config 00:02:05.670 jobstats: explicitly disabled via build config 00:02:05.670 latencystats: explicitly disabled via build config 00:02:05.670 lpm: explicitly disabled via build config 00:02:05.670 member: explicitly disabled via build config 00:02:05.670 pcapng: explicitly disabled via build config 00:02:05.670 rawdev: explicitly disabled via build config 00:02:05.670 regexdev: explicitly disabled via build config 00:02:05.670 mldev: explicitly disabled via build config 00:02:05.670 rib: explicitly disabled via build config 00:02:05.670 sched: explicitly disabled via build config 00:02:05.670 stack: explicitly disabled via build config 00:02:05.670 ipsec: explicitly disabled via build config 00:02:05.670 pdcp: explicitly disabled via build config 00:02:05.670 fib: explicitly disabled via build config 00:02:05.670 port: explicitly disabled via build config 00:02:05.670 pdump: explicitly disabled via build config 00:02:05.670 table: explicitly disabled via build config 00:02:05.670 pipeline: explicitly disabled via build config 00:02:05.670 graph: explicitly disabled via build config 00:02:05.670 node: explicitly disabled via build config 00:02:05.670 00:02:05.670 drivers: 00:02:05.670 common/cpt: not in enabled drivers build config 00:02:05.670 common/dpaax: not in enabled drivers build config 00:02:05.670 common/iavf: not in enabled drivers build config 00:02:05.670 common/idpf: not in enabled drivers build config 00:02:05.670 common/ionic: not in enabled drivers build config 00:02:05.670 common/mvep: not in enabled drivers build config 00:02:05.670 common/octeontx: not in enabled drivers build config 00:02:05.670 bus/auxiliary: not in enabled drivers build config 00:02:05.670 bus/cdx: not in enabled drivers build config 00:02:05.670 bus/dpaa: not in enabled drivers build config 00:02:05.670 bus/fslmc: not in enabled drivers build config 00:02:05.670 bus/ifpga: not in enabled drivers build config 00:02:05.670 bus/platform: not in enabled drivers build config 00:02:05.670 bus/uacce: not in enabled drivers build config 00:02:05.670 bus/vmbus: not in enabled drivers build config 00:02:05.670 common/cnxk: not in enabled drivers build config 00:02:05.670 common/mlx5: not in enabled drivers build config 00:02:05.671 common/nfp: not in enabled drivers build config 00:02:05.671 common/nitrox: not in enabled drivers build config 00:02:05.671 common/qat: not in enabled drivers build config 00:02:05.671 common/sfc_efx: not in enabled drivers build config 00:02:05.671 mempool/bucket: not in enabled drivers build config 00:02:05.671 
mempool/cnxk: not in enabled drivers build config 00:02:05.671 mempool/dpaa: not in enabled drivers build config 00:02:05.671 mempool/dpaa2: not in enabled drivers build config 00:02:05.671 mempool/octeontx: not in enabled drivers build config 00:02:05.671 mempool/stack: not in enabled drivers build config 00:02:05.671 dma/cnxk: not in enabled drivers build config 00:02:05.671 dma/dpaa: not in enabled drivers build config 00:02:05.671 dma/dpaa2: not in enabled drivers build config 00:02:05.671 dma/hisilicon: not in enabled drivers build config 00:02:05.671 dma/idxd: not in enabled drivers build config 00:02:05.671 dma/ioat: not in enabled drivers build config 00:02:05.671 dma/skeleton: not in enabled drivers build config 00:02:05.671 net/af_packet: not in enabled drivers build config 00:02:05.671 net/af_xdp: not in enabled drivers build config 00:02:05.671 net/ark: not in enabled drivers build config 00:02:05.671 net/atlantic: not in enabled drivers build config 00:02:05.671 net/avp: not in enabled drivers build config 00:02:05.671 net/axgbe: not in enabled drivers build config 00:02:05.671 net/bnx2x: not in enabled drivers build config 00:02:05.671 net/bnxt: not in enabled drivers build config 00:02:05.671 net/bonding: not in enabled drivers build config 00:02:05.671 net/cnxk: not in enabled drivers build config 00:02:05.671 net/cpfl: not in enabled drivers build config 00:02:05.671 net/cxgbe: not in enabled drivers build config 00:02:05.671 net/dpaa: not in enabled drivers build config 00:02:05.671 net/dpaa2: not in enabled drivers build config 00:02:05.671 net/e1000: not in enabled drivers build config 00:02:05.671 net/ena: not in enabled drivers build config 00:02:05.671 net/enetc: not in enabled drivers build config 00:02:05.671 net/enetfec: not in enabled drivers build config 00:02:05.671 net/enic: not in enabled drivers build config 00:02:05.671 net/failsafe: not in enabled drivers build config 00:02:05.671 net/fm10k: not in enabled drivers build config 00:02:05.671 net/gve: not in enabled drivers build config 00:02:05.671 net/hinic: not in enabled drivers build config 00:02:05.671 net/hns3: not in enabled drivers build config 00:02:05.671 net/i40e: not in enabled drivers build config 00:02:05.671 net/iavf: not in enabled drivers build config 00:02:05.671 net/ice: not in enabled drivers build config 00:02:05.671 net/idpf: not in enabled drivers build config 00:02:05.671 net/igc: not in enabled drivers build config 00:02:05.671 net/ionic: not in enabled drivers build config 00:02:05.671 net/ipn3ke: not in enabled drivers build config 00:02:05.671 net/ixgbe: not in enabled drivers build config 00:02:05.671 net/mana: not in enabled drivers build config 00:02:05.671 net/memif: not in enabled drivers build config 00:02:05.671 net/mlx4: not in enabled drivers build config 00:02:05.671 net/mlx5: not in enabled drivers build config 00:02:05.671 net/mvneta: not in enabled drivers build config 00:02:05.671 net/mvpp2: not in enabled drivers build config 00:02:05.671 net/netvsc: not in enabled drivers build config 00:02:05.671 net/nfb: not in enabled drivers build config 00:02:05.671 net/nfp: not in enabled drivers build config 00:02:05.671 net/ngbe: not in enabled drivers build config 00:02:05.671 net/null: not in enabled drivers build config 00:02:05.671 net/octeontx: not in enabled drivers build config 00:02:05.671 net/octeon_ep: not in enabled drivers build config 00:02:05.671 net/pcap: not in enabled drivers build config 00:02:05.671 net/pfe: not in enabled drivers build config 
00:02:05.671 net/qede: not in enabled drivers build config 00:02:05.671 net/ring: not in enabled drivers build config 00:02:05.671 net/sfc: not in enabled drivers build config 00:02:05.671 net/softnic: not in enabled drivers build config 00:02:05.671 net/tap: not in enabled drivers build config 00:02:05.671 net/thunderx: not in enabled drivers build config 00:02:05.671 net/txgbe: not in enabled drivers build config 00:02:05.671 net/vdev_netvsc: not in enabled drivers build config 00:02:05.671 net/vhost: not in enabled drivers build config 00:02:05.671 net/virtio: not in enabled drivers build config 00:02:05.671 net/vmxnet3: not in enabled drivers build config 00:02:05.671 raw/*: missing internal dependency, "rawdev" 00:02:05.671 crypto/armv8: not in enabled drivers build config 00:02:05.671 crypto/bcmfs: not in enabled drivers build config 00:02:05.671 crypto/caam_jr: not in enabled drivers build config 00:02:05.671 crypto/ccp: not in enabled drivers build config 00:02:05.671 crypto/cnxk: not in enabled drivers build config 00:02:05.671 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.671 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.671 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.671 crypto/mlx5: not in enabled drivers build config 00:02:05.671 crypto/mvsam: not in enabled drivers build config 00:02:05.671 crypto/nitrox: not in enabled drivers build config 00:02:05.671 crypto/null: not in enabled drivers build config 00:02:05.671 crypto/octeontx: not in enabled drivers build config 00:02:05.671 crypto/openssl: not in enabled drivers build config 00:02:05.671 crypto/scheduler: not in enabled drivers build config 00:02:05.671 crypto/uadk: not in enabled drivers build config 00:02:05.671 crypto/virtio: not in enabled drivers build config 00:02:05.671 compress/isal: not in enabled drivers build config 00:02:05.671 compress/mlx5: not in enabled drivers build config 00:02:05.671 compress/nitrox: not in enabled drivers build config 00:02:05.671 compress/octeontx: not in enabled drivers build config 00:02:05.671 compress/zlib: not in enabled drivers build config 00:02:05.671 regex/*: missing internal dependency, "regexdev" 00:02:05.671 ml/*: missing internal dependency, "mldev" 00:02:05.671 vdpa/ifc: not in enabled drivers build config 00:02:05.671 vdpa/mlx5: not in enabled drivers build config 00:02:05.671 vdpa/nfp: not in enabled drivers build config 00:02:05.671 vdpa/sfc: not in enabled drivers build config 00:02:05.671 event/*: missing internal dependency, "eventdev" 00:02:05.671 baseband/*: missing internal dependency, "bbdev" 00:02:05.671 gpu/*: missing internal dependency, "gpudev" 00:02:05.671 00:02:05.671 00:02:05.671 Build targets in project: 84 00:02:05.671 00:02:05.671 DPDK 24.03.0 00:02:05.671 00:02:05.671 User defined options 00:02:05.671 buildtype : debug 00:02:05.671 default_library : shared 00:02:05.671 libdir : lib 00:02:05.671 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:05.671 b_sanitize : address 00:02:05.671 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.671 c_link_args : 00:02:05.671 cpu_instruction_set: native 00:02:05.671 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:02:05.671 disable_libs : 
acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:02:05.671 enable_docs : false 00:02:05.671 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:05.671 enable_kmods : false 00:02:05.671 max_lcores : 128 00:02:05.671 tests : false 00:02:05.671 00:02:05.671 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.671 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:05.671 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.671 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:05.671 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.671 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.671 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.671 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.671 [7/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.671 [8/267] Linking static target lib/librte_kvargs.a 00:02:05.671 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.671 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:05.671 [11/267] Linking static target lib/librte_log.a 00:02:05.671 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.671 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.671 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.671 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.671 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.671 [17/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:05.671 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.671 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:05.671 [20/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.671 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:05.671 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:05.671 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:05.671 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:05.671 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:05.671 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:05.671 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:05.671 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:05.671 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:05.671 [30/267] Linking static target lib/librte_pci.a 00:02:05.671 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:05.671 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:05.671 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:05.671 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
00:02:05.671 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:05.671 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.671 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.671 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:05.672 [39/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.672 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.672 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.672 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.672 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.672 [44/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.672 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.672 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.672 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.672 [48/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.672 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.672 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.672 [51/267] Linking static target lib/librte_meter.a 00:02:05.672 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.672 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.672 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.672 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.672 [56/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.672 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.672 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.672 [59/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.672 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.672 [61/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.672 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.672 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.672 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.672 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.672 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.672 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.672 [68/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.672 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.672 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.672 [71/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:05.672 [72/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:05.672 [73/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.672 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.672 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.672 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.672 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.672 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.672 [79/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:05.672 [80/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:05.672 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.672 [82/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.672 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.672 [84/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.672 [85/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:05.672 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.672 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.672 [88/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.672 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:05.672 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.672 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:05.672 [92/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:05.672 [93/267] Linking static target lib/librte_telemetry.a 00:02:05.672 [94/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:05.672 [95/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:05.672 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:05.672 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.672 [98/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.672 [99/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.672 [100/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:05.672 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.672 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.672 [103/267] Linking static target lib/librte_dmadev.a 00:02:05.672 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:05.672 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.672 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.672 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.672 [108/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.672 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.672 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.672 [111/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:05.672 [112/267] Linking static target lib/librte_cmdline.a 00:02:05.672 [113/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:05.672 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:05.672 [115/267] Linking target lib/librte_log.so.24.1 00:02:05.672 [116/267] Linking static target lib/librte_ring.a 00:02:05.672 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:05.672 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.672 [119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.672 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:05.672 [121/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:05.672 [122/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.672 [123/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:05.672 [124/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:05.672 [125/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:05.672 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:05.672 [127/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.672 [128/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.672 [129/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.672 [130/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:05.672 [131/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.672 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:05.672 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.672 [134/267] Linking static target lib/librte_timer.a 00:02:05.672 [135/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:05.672 [136/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:05.672 [137/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:05.672 [138/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.672 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.672 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.672 [141/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.672 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.672 [143/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.672 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.672 [145/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:05.672 [146/267] Linking static target lib/librte_mempool.a 00:02:05.672 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.672 [148/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.672 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:05.672 [150/267] Linking static target lib/librte_power.a 00:02:05.672 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:05.672 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.672 [153/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:05.672 [154/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:05.672 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.672 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.672 [157/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.672 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.672 [159/267] Linking static target lib/librte_rcu.a 00:02:05.672 [160/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:05.672 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.672 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.672 [163/267] Linking static target lib/librte_net.a 00:02:05.672 [164/267] Linking target lib/librte_kvargs.so.24.1 00:02:05.672 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:05.672 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.672 [167/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.672 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.672 [169/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.672 [170/267] Linking static target lib/librte_compressdev.a 00:02:05.672 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.672 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:05.672 [173/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.672 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.672 [175/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.672 [176/267] Linking static target lib/librte_eal.a 00:02:05.672 [177/267] Linking static target lib/librte_reorder.a 00:02:05.672 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.672 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.672 [180/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.673 [181/267] Linking static target lib/librte_security.a 00:02:05.673 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.673 [183/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.673 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.933 [185/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.933 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.933 [187/267] Linking static target drivers/librte_bus_vdev.a 00:02:05.933 [188/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.933 [189/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.933 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.933 [191/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.933 [192/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.933 [193/267] Linking static target drivers/librte_bus_pci.a 00:02:05.933 [194/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.933 [195/267] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.933 [196/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.933 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.933 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.933 [199/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.933 [200/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.933 [201/267] Linking target lib/librte_telemetry.so.24.1 00:02:05.933 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.933 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.194 [204/267] Linking static target drivers/librte_mempool_ring.a 00:02:06.194 [205/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:06.194 [206/267] Linking static target lib/librte_mbuf.a 00:02:06.194 [207/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:06.194 [208/267] Linking static target lib/librte_hash.a 00:02:06.194 [209/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.194 [210/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.194 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.194 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.456 [213/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:06.456 [214/267] Linking static target lib/librte_cryptodev.a 00:02:06.456 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.456 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.456 [217/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.456 [218/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.456 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.717 [220/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.717 [221/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.978 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.239 [223/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.239 [224/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.239 [225/267] Linking static target lib/librte_ethdev.a 00:02:07.501 [226/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.446 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.752 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.752 [229/267] Linking static target lib/librte_vhost.a 00:02:13.713 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.058 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:17.319 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.319 [233/267] Linking target lib/librte_eal.so.24.1 00:02:17.580 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.580 [235/267] Linking target lib/librte_meter.so.24.1 00:02:17.580 [236/267] Linking target lib/librte_ring.so.24.1 00:02:17.580 [237/267] Linking target lib/librte_timer.so.24.1 00:02:17.580 [238/267] Linking target lib/librte_pci.so.24.1 00:02:17.580 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:17.580 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.842 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.842 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.842 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.842 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.842 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.842 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:17.842 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:17.842 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.842 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.842 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.103 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.103 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:18.103 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.364 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:18.364 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:18.364 [256/267] Linking target lib/librte_net.so.24.1 00:02:18.364 [257/267] Linking target lib/librte_reorder.so.24.1 00:02:18.364 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.364 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.364 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:18.364 [261/267] Linking target lib/librte_hash.so.24.1 00:02:18.364 [262/267] Linking target lib/librte_security.so.24.1 00:02:18.364 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:18.750 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.750 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.750 [266/267] Linking target lib/librte_power.so.24.1 00:02:18.750 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:18.750 INFO: autodetecting backend as ninja 00:02:18.750 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:19.692 CC lib/ut_mock/mock.o 00:02:19.692 CC lib/log/log.o 00:02:19.692 CC lib/log/log_flags.o 00:02:19.692 CC lib/log/log_deprecated.o 00:02:19.692 CC lib/ut/ut.o 00:02:19.953 LIB libspdk_ut_mock.a 00:02:19.953 LIB libspdk_log.a 00:02:19.953 LIB libspdk_ut.a 00:02:19.953 SO libspdk_ut_mock.so.6.0 00:02:19.953 SO libspdk_log.so.7.0 00:02:19.953 SO libspdk_ut.so.2.0 00:02:20.214 SYMLINK libspdk_ut_mock.so 00:02:20.214 SYMLINK libspdk_log.so 00:02:20.214 SYMLINK libspdk_ut.so 
00:02:20.476 CC lib/util/base64.o 00:02:20.476 CC lib/util/bit_array.o 00:02:20.476 CXX lib/trace_parser/trace.o 00:02:20.476 CC lib/util/cpuset.o 00:02:20.476 CC lib/util/crc16.o 00:02:20.476 CC lib/util/crc32.o 00:02:20.476 CC lib/dma/dma.o 00:02:20.476 CC lib/util/crc32c.o 00:02:20.476 CC lib/ioat/ioat.o 00:02:20.476 CC lib/util/crc32_ieee.o 00:02:20.476 CC lib/util/crc64.o 00:02:20.476 CC lib/util/dif.o 00:02:20.476 CC lib/util/fd.o 00:02:20.476 CC lib/util/fd_group.o 00:02:20.476 CC lib/util/file.o 00:02:20.476 CC lib/util/hexlify.o 00:02:20.476 CC lib/util/iov.o 00:02:20.476 CC lib/util/net.o 00:02:20.476 CC lib/util/math.o 00:02:20.476 CC lib/util/pipe.o 00:02:20.476 CC lib/util/strerror_tls.o 00:02:20.476 CC lib/util/string.o 00:02:20.476 CC lib/util/zipf.o 00:02:20.476 CC lib/util/uuid.o 00:02:20.476 CC lib/util/xor.o 00:02:20.738 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.738 CC lib/vfio_user/host/vfio_user.o 00:02:20.738 LIB libspdk_dma.a 00:02:20.738 SO libspdk_dma.so.4.0 00:02:20.738 LIB libspdk_ioat.a 00:02:20.738 SYMLINK libspdk_dma.so 00:02:20.738 SO libspdk_ioat.so.7.0 00:02:20.999 LIB libspdk_vfio_user.a 00:02:20.999 SYMLINK libspdk_ioat.so 00:02:20.999 SO libspdk_vfio_user.so.5.0 00:02:20.999 SYMLINK libspdk_vfio_user.so 00:02:21.261 LIB libspdk_util.a 00:02:21.261 SO libspdk_util.so.10.0 00:02:21.522 SYMLINK libspdk_util.so 00:02:21.522 LIB libspdk_trace_parser.a 00:02:21.522 SO libspdk_trace_parser.so.5.0 00:02:21.522 SYMLINK libspdk_trace_parser.so 00:02:21.780 CC lib/conf/conf.o 00:02:21.780 CC lib/rdma_provider/common.o 00:02:21.780 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:21.780 CC lib/json/json_parse.o 00:02:21.780 CC lib/json/json_util.o 00:02:21.780 CC lib/json/json_write.o 00:02:21.780 CC lib/idxd/idxd.o 00:02:21.780 CC lib/vmd/vmd.o 00:02:21.780 CC lib/env_dpdk/env.o 00:02:21.780 CC lib/idxd/idxd_user.o 00:02:21.780 CC lib/vmd/led.o 00:02:21.780 CC lib/env_dpdk/memory.o 00:02:21.780 CC lib/idxd/idxd_kernel.o 00:02:21.780 CC lib/env_dpdk/pci.o 00:02:21.780 CC lib/rdma_utils/rdma_utils.o 00:02:21.780 CC lib/env_dpdk/init.o 00:02:21.780 CC lib/env_dpdk/pci_ioat.o 00:02:21.780 CC lib/env_dpdk/threads.o 00:02:21.780 CC lib/env_dpdk/pci_virtio.o 00:02:21.780 CC lib/env_dpdk/pci_vmd.o 00:02:21.780 CC lib/env_dpdk/pci_idxd.o 00:02:21.780 CC lib/env_dpdk/pci_event.o 00:02:21.780 CC lib/env_dpdk/sigbus_handler.o 00:02:21.780 CC lib/env_dpdk/pci_dpdk.o 00:02:21.780 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.780 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:22.039 LIB libspdk_rdma_provider.a 00:02:22.039 LIB libspdk_conf.a 00:02:22.039 SO libspdk_rdma_provider.so.6.0 00:02:22.039 SO libspdk_conf.so.6.0 00:02:22.039 LIB libspdk_rdma_utils.a 00:02:22.039 SYMLINK libspdk_rdma_provider.so 00:02:22.039 SO libspdk_rdma_utils.so.1.0 00:02:22.040 LIB libspdk_json.a 00:02:22.040 SYMLINK libspdk_conf.so 00:02:22.040 SO libspdk_json.so.6.0 00:02:22.300 SYMLINK libspdk_rdma_utils.so 00:02:22.300 SYMLINK libspdk_json.so 00:02:22.300 LIB libspdk_idxd.a 00:02:22.562 LIB libspdk_vmd.a 00:02:22.562 SO libspdk_idxd.so.12.0 00:02:22.562 SO libspdk_vmd.so.6.0 00:02:22.562 SYMLINK libspdk_idxd.so 00:02:22.562 CC lib/jsonrpc/jsonrpc_server.o 00:02:22.562 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:22.562 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.562 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.562 SYMLINK libspdk_vmd.so 00:02:22.823 LIB libspdk_jsonrpc.a 00:02:22.823 SO libspdk_jsonrpc.so.6.0 00:02:23.084 SYMLINK libspdk_jsonrpc.so 00:02:23.345 LIB libspdk_env_dpdk.a 00:02:23.345 CC 
lib/rpc/rpc.o 00:02:23.345 SO libspdk_env_dpdk.so.15.0 00:02:23.608 SYMLINK libspdk_env_dpdk.so 00:02:23.608 LIB libspdk_rpc.a 00:02:23.608 SO libspdk_rpc.so.6.0 00:02:23.608 SYMLINK libspdk_rpc.so 00:02:24.182 CC lib/keyring/keyring.o 00:02:24.182 CC lib/trace/trace.o 00:02:24.182 CC lib/trace/trace_flags.o 00:02:24.182 CC lib/keyring/keyring_rpc.o 00:02:24.182 CC lib/trace/trace_rpc.o 00:02:24.182 CC lib/notify/notify.o 00:02:24.182 CC lib/notify/notify_rpc.o 00:02:24.182 LIB libspdk_notify.a 00:02:24.182 SO libspdk_notify.so.6.0 00:02:24.444 LIB libspdk_keyring.a 00:02:24.444 LIB libspdk_trace.a 00:02:24.444 SYMLINK libspdk_notify.so 00:02:24.444 SO libspdk_keyring.so.1.0 00:02:24.444 SO libspdk_trace.so.10.0 00:02:24.444 SYMLINK libspdk_keyring.so 00:02:24.444 SYMLINK libspdk_trace.so 00:02:24.705 CC lib/thread/thread.o 00:02:24.705 CC lib/sock/sock.o 00:02:24.705 CC lib/thread/iobuf.o 00:02:24.705 CC lib/sock/sock_rpc.o 00:02:25.279 LIB libspdk_sock.a 00:02:25.279 SO libspdk_sock.so.10.0 00:02:25.279 SYMLINK libspdk_sock.so 00:02:25.850 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.850 CC lib/nvme/nvme_ctrlr.o 00:02:25.850 CC lib/nvme/nvme_fabric.o 00:02:25.850 CC lib/nvme/nvme_ns_cmd.o 00:02:25.850 CC lib/nvme/nvme_ns.o 00:02:25.850 CC lib/nvme/nvme_pcie_common.o 00:02:25.850 CC lib/nvme/nvme_pcie.o 00:02:25.850 CC lib/nvme/nvme_qpair.o 00:02:25.850 CC lib/nvme/nvme.o 00:02:25.850 CC lib/nvme/nvme_quirks.o 00:02:25.851 CC lib/nvme/nvme_transport.o 00:02:25.851 CC lib/nvme/nvme_discovery.o 00:02:25.851 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.851 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.851 CC lib/nvme/nvme_tcp.o 00:02:25.851 CC lib/nvme/nvme_opal.o 00:02:25.851 CC lib/nvme/nvme_io_msg.o 00:02:25.851 CC lib/nvme/nvme_poll_group.o 00:02:25.851 CC lib/nvme/nvme_zns.o 00:02:25.851 CC lib/nvme/nvme_stubs.o 00:02:25.851 CC lib/nvme/nvme_auth.o 00:02:25.851 CC lib/nvme/nvme_cuse.o 00:02:25.851 CC lib/nvme/nvme_rdma.o 00:02:26.421 LIB libspdk_thread.a 00:02:26.421 SO libspdk_thread.so.10.1 00:02:26.682 SYMLINK libspdk_thread.so 00:02:26.943 CC lib/accel/accel.o 00:02:26.943 CC lib/accel/accel_rpc.o 00:02:26.943 CC lib/accel/accel_sw.o 00:02:26.943 CC lib/init/json_config.o 00:02:26.943 CC lib/init/subsystem.o 00:02:26.943 CC lib/init/subsystem_rpc.o 00:02:26.943 CC lib/init/rpc.o 00:02:26.943 CC lib/blob/blobstore.o 00:02:26.943 CC lib/blob/request.o 00:02:26.943 CC lib/blob/zeroes.o 00:02:26.943 CC lib/blob/blob_bs_dev.o 00:02:26.943 CC lib/virtio/virtio.o 00:02:26.943 CC lib/virtio/virtio_vhost_user.o 00:02:26.943 CC lib/virtio/virtio_vfio_user.o 00:02:26.943 CC lib/virtio/virtio_pci.o 00:02:27.205 LIB libspdk_init.a 00:02:27.205 SO libspdk_init.so.5.0 00:02:27.205 LIB libspdk_virtio.a 00:02:27.205 SYMLINK libspdk_init.so 00:02:27.205 SO libspdk_virtio.so.7.0 00:02:27.466 SYMLINK libspdk_virtio.so 00:02:27.727 CC lib/event/app.o 00:02:27.727 CC lib/event/reactor.o 00:02:27.727 CC lib/event/log_rpc.o 00:02:27.727 CC lib/event/app_rpc.o 00:02:27.728 CC lib/event/scheduler_static.o 00:02:27.989 LIB libspdk_accel.a 00:02:27.989 LIB libspdk_nvme.a 00:02:27.989 SO libspdk_accel.so.16.0 00:02:27.989 SYMLINK libspdk_accel.so 00:02:27.989 SO libspdk_nvme.so.13.1 00:02:27.989 LIB libspdk_event.a 00:02:28.251 SO libspdk_event.so.14.0 00:02:28.251 SYMLINK libspdk_event.so 00:02:28.251 CC lib/bdev/bdev.o 00:02:28.251 CC lib/bdev/bdev_rpc.o 00:02:28.251 CC lib/bdev/bdev_zone.o 00:02:28.251 CC lib/bdev/part.o 00:02:28.251 CC lib/bdev/scsi_nvme.o 00:02:28.512 SYMLINK libspdk_nvme.so 00:02:30.428 LIB 
libspdk_blob.a 00:02:30.428 SO libspdk_blob.so.11.0 00:02:30.428 SYMLINK libspdk_blob.so 00:02:30.688 CC lib/blobfs/blobfs.o 00:02:30.688 CC lib/blobfs/tree.o 00:02:30.689 CC lib/lvol/lvol.o 00:02:31.260 LIB libspdk_bdev.a 00:02:31.260 SO libspdk_bdev.so.16.0 00:02:31.522 SYMLINK libspdk_bdev.so 00:02:31.826 LIB libspdk_blobfs.a 00:02:31.826 SO libspdk_blobfs.so.10.0 00:02:31.826 CC lib/scsi/dev.o 00:02:31.826 CC lib/scsi/port.o 00:02:31.826 CC lib/scsi/lun.o 00:02:31.826 CC lib/scsi/scsi.o 00:02:31.826 CC lib/scsi/scsi_bdev.o 00:02:31.826 CC lib/scsi/scsi_pr.o 00:02:31.826 CC lib/ftl/ftl_core.o 00:02:31.826 CC lib/scsi/scsi_rpc.o 00:02:31.826 CC lib/ftl/ftl_init.o 00:02:31.826 CC lib/scsi/task.o 00:02:31.826 CC lib/ftl/ftl_layout.o 00:02:31.826 CC lib/ftl/ftl_debug.o 00:02:31.826 CC lib/nvmf/ctrlr.o 00:02:31.826 CC lib/nvmf/ctrlr_bdev.o 00:02:31.826 CC lib/ftl/ftl_io.o 00:02:31.826 CC lib/nvmf/ctrlr_discovery.o 00:02:31.826 CC lib/ftl/ftl_sb.o 00:02:31.826 CC lib/ftl/ftl_l2p.o 00:02:31.826 CC lib/nvmf/subsystem.o 00:02:31.826 CC lib/ftl/ftl_l2p_flat.o 00:02:31.826 CC lib/nvmf/nvmf.o 00:02:31.826 CC lib/ftl/ftl_nv_cache.o 00:02:31.826 CC lib/nbd/nbd.o 00:02:31.826 CC lib/nvmf/nvmf_rpc.o 00:02:31.826 CC lib/ftl/ftl_band.o 00:02:31.826 CC lib/nbd/nbd_rpc.o 00:02:31.826 CC lib/nvmf/transport.o 00:02:31.826 CC lib/ftl/ftl_band_ops.o 00:02:31.826 CC lib/ublk/ublk.o 00:02:31.826 CC lib/nvmf/tcp.o 00:02:31.826 CC lib/ftl/ftl_writer.o 00:02:31.826 CC lib/ublk/ublk_rpc.o 00:02:31.826 CC lib/ftl/ftl_rq.o 00:02:31.826 CC lib/nvmf/stubs.o 00:02:31.826 CC lib/nvmf/mdns_server.o 00:02:31.826 CC lib/ftl/ftl_reloc.o 00:02:31.826 CC lib/ftl/ftl_l2p_cache.o 00:02:31.826 CC lib/nvmf/rdma.o 00:02:31.826 CC lib/nvmf/auth.o 00:02:31.826 CC lib/ftl/ftl_p2l.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:31.826 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:31.826 CC lib/ftl/utils/ftl_conf.o 00:02:31.826 CC lib/ftl/utils/ftl_md.o 00:02:31.826 CC lib/ftl/utils/ftl_mempool.o 00:02:31.826 CC lib/ftl/utils/ftl_property.o 00:02:31.826 CC lib/ftl/utils/ftl_bitmap.o 00:02:31.826 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:31.826 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:31.826 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:31.826 SYMLINK libspdk_blobfs.so 00:02:31.826 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:31.826 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:31.826 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:31.826 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:31.826 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:31.826 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:31.826 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:31.826 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:31.826 CC lib/ftl/base/ftl_base_dev.o 00:02:31.827 LIB libspdk_lvol.a 00:02:31.827 CC lib/ftl/ftl_trace.o 00:02:31.827 CC lib/ftl/base/ftl_base_bdev.o 00:02:31.827 SO libspdk_lvol.so.10.0 00:02:32.088 SYMLINK libspdk_lvol.so 00:02:32.348 LIB libspdk_nbd.a 00:02:32.348 SO libspdk_nbd.so.7.0 00:02:32.348 LIB libspdk_scsi.a 00:02:32.348 SYMLINK libspdk_nbd.so 00:02:32.348 SO libspdk_scsi.so.9.0 
00:02:32.608 LIB libspdk_ublk.a 00:02:32.608 SYMLINK libspdk_scsi.so 00:02:32.608 SO libspdk_ublk.so.3.0 00:02:32.608 SYMLINK libspdk_ublk.so 00:02:32.869 CC lib/iscsi/conn.o 00:02:32.869 CC lib/iscsi/init_grp.o 00:02:32.869 CC lib/iscsi/iscsi.o 00:02:32.869 CC lib/iscsi/md5.o 00:02:32.869 CC lib/iscsi/param.o 00:02:32.869 CC lib/iscsi/portal_grp.o 00:02:32.869 CC lib/iscsi/tgt_node.o 00:02:32.869 CC lib/iscsi/iscsi_rpc.o 00:02:32.869 CC lib/iscsi/iscsi_subsystem.o 00:02:32.869 CC lib/iscsi/task.o 00:02:32.869 CC lib/vhost/vhost.o 00:02:32.869 CC lib/vhost/vhost_rpc.o 00:02:32.869 CC lib/vhost/vhost_scsi.o 00:02:32.869 CC lib/vhost/vhost_blk.o 00:02:32.869 CC lib/vhost/rte_vhost_user.o 00:02:33.129 LIB libspdk_ftl.a 00:02:33.129 SO libspdk_ftl.so.9.0 00:02:33.699 SYMLINK libspdk_ftl.so 00:02:33.959 LIB libspdk_vhost.a 00:02:33.959 SO libspdk_vhost.so.8.0 00:02:34.218 SYMLINK libspdk_vhost.so 00:02:34.218 LIB libspdk_nvmf.a 00:02:34.218 SO libspdk_nvmf.so.19.0 00:02:34.479 LIB libspdk_iscsi.a 00:02:34.479 SO libspdk_iscsi.so.8.0 00:02:34.479 SYMLINK libspdk_nvmf.so 00:02:34.739 SYMLINK libspdk_iscsi.so 00:02:35.310 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.310 CC module/accel/error/accel_error.o 00:02:35.572 CC module/accel/error/accel_error_rpc.o 00:02:35.572 CC module/accel/ioat/accel_ioat.o 00:02:35.572 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.572 CC module/accel/iaa/accel_iaa.o 00:02:35.572 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.572 LIB libspdk_env_dpdk_rpc.a 00:02:35.572 CC module/accel/dsa/accel_dsa.o 00:02:35.572 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.572 CC module/keyring/file/keyring.o 00:02:35.572 CC module/blob/bdev/blob_bdev.o 00:02:35.572 CC module/keyring/file/keyring_rpc.o 00:02:35.572 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.572 CC module/sock/posix/posix.o 00:02:35.572 CC module/keyring/linux/keyring.o 00:02:35.572 CC module/keyring/linux/keyring_rpc.o 00:02:35.572 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.572 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.572 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.572 SYMLINK libspdk_env_dpdk_rpc.so 00:02:35.572 LIB libspdk_keyring_file.a 00:02:35.572 LIB libspdk_accel_ioat.a 00:02:35.572 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.572 LIB libspdk_scheduler_gscheduler.a 00:02:35.572 LIB libspdk_keyring_linux.a 00:02:35.572 LIB libspdk_accel_error.a 00:02:35.572 LIB libspdk_accel_iaa.a 00:02:35.572 SO libspdk_keyring_file.so.1.0 00:02:35.832 LIB libspdk_scheduler_dynamic.a 00:02:35.832 SO libspdk_accel_ioat.so.6.0 00:02:35.832 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:35.832 SO libspdk_scheduler_gscheduler.so.4.0 00:02:35.832 SO libspdk_keyring_linux.so.1.0 00:02:35.832 SO libspdk_accel_error.so.2.0 00:02:35.832 SO libspdk_accel_iaa.so.3.0 00:02:35.832 SO libspdk_scheduler_dynamic.so.4.0 00:02:35.832 SYMLINK libspdk_keyring_file.so 00:02:35.832 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:35.832 LIB libspdk_accel_dsa.a 00:02:35.832 LIB libspdk_blob_bdev.a 00:02:35.832 SYMLINK libspdk_accel_ioat.so 00:02:35.832 SYMLINK libspdk_scheduler_gscheduler.so 00:02:35.832 SYMLINK libspdk_accel_error.so 00:02:35.832 SYMLINK libspdk_keyring_linux.so 00:02:35.832 SO libspdk_blob_bdev.so.11.0 00:02:35.832 SYMLINK libspdk_accel_iaa.so 00:02:35.832 SYMLINK libspdk_scheduler_dynamic.so 00:02:35.832 SO libspdk_accel_dsa.so.5.0 00:02:35.832 SYMLINK libspdk_blob_bdev.so 00:02:35.832 SYMLINK libspdk_accel_dsa.so 00:02:36.403 LIB libspdk_sock_posix.a 00:02:36.403 SO libspdk_sock_posix.so.6.0 
00:02:36.403 CC module/bdev/ftl/bdev_ftl.o 00:02:36.403 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.403 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.403 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.403 CC module/bdev/error/vbdev_error.o 00:02:36.403 CC module/bdev/delay/vbdev_delay.o 00:02:36.403 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.403 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.403 CC module/bdev/nvme/bdev_nvme.o 00:02:36.403 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.403 CC module/bdev/nvme/nvme_rpc.o 00:02:36.403 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.403 CC module/bdev/nvme/vbdev_opal.o 00:02:36.403 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.403 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.403 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.403 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.403 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.403 CC module/bdev/malloc/bdev_malloc.o 00:02:36.403 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.403 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.403 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.403 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.403 CC module/bdev/null/bdev_null.o 00:02:36.403 CC module/bdev/gpt/gpt.o 00:02:36.403 CC module/bdev/raid/bdev_raid.o 00:02:36.403 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.403 CC module/bdev/split/vbdev_split.o 00:02:36.403 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.403 CC module/bdev/null/bdev_null_rpc.o 00:02:36.403 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.403 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.403 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.403 CC module/bdev/raid/raid0.o 00:02:36.403 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.403 CC module/bdev/raid/raid1.o 00:02:36.403 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:36.403 CC module/bdev/raid/concat.o 00:02:36.403 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.403 CC module/bdev/aio/bdev_aio.o 00:02:36.403 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.403 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.403 SYMLINK libspdk_sock_posix.so 00:02:36.663 LIB libspdk_blobfs_bdev.a 00:02:36.663 SO libspdk_blobfs_bdev.so.6.0 00:02:36.663 LIB libspdk_bdev_error.a 00:02:36.663 LIB libspdk_bdev_split.a 00:02:36.663 LIB libspdk_bdev_ftl.a 00:02:36.663 SYMLINK libspdk_blobfs_bdev.so 00:02:36.663 LIB libspdk_bdev_zone_block.a 00:02:36.663 SO libspdk_bdev_error.so.6.0 00:02:36.663 SO libspdk_bdev_split.so.6.0 00:02:36.663 LIB libspdk_bdev_gpt.a 00:02:36.663 SO libspdk_bdev_ftl.so.6.0 00:02:36.924 SO libspdk_bdev_zone_block.so.6.0 00:02:36.924 LIB libspdk_bdev_passthru.a 00:02:36.924 SO libspdk_bdev_gpt.so.6.0 00:02:36.924 SYMLINK libspdk_bdev_split.so 00:02:36.924 LIB libspdk_bdev_delay.a 00:02:36.925 SYMLINK libspdk_bdev_error.so 00:02:36.925 LIB libspdk_bdev_aio.a 00:02:36.925 SO libspdk_bdev_passthru.so.6.0 00:02:36.925 SYMLINK libspdk_bdev_ftl.so 00:02:36.925 LIB libspdk_bdev_null.a 00:02:36.925 SO libspdk_bdev_delay.so.6.0 00:02:36.925 LIB libspdk_bdev_iscsi.a 00:02:36.925 LIB libspdk_bdev_malloc.a 00:02:36.925 SO libspdk_bdev_aio.so.6.0 00:02:36.925 SYMLINK libspdk_bdev_gpt.so 00:02:36.925 SYMLINK libspdk_bdev_zone_block.so 00:02:36.925 SO libspdk_bdev_null.so.6.0 00:02:36.925 SO libspdk_bdev_iscsi.so.6.0 00:02:36.925 SO libspdk_bdev_malloc.so.6.0 00:02:36.925 SYMLINK libspdk_bdev_passthru.so 00:02:36.925 SYMLINK libspdk_bdev_delay.so 00:02:36.925 SYMLINK libspdk_bdev_aio.so 00:02:36.925 SYMLINK libspdk_bdev_null.so 00:02:36.925 SYMLINK libspdk_bdev_iscsi.so 00:02:36.925 
SYMLINK libspdk_bdev_malloc.so 00:02:36.925 LIB libspdk_bdev_lvol.a 00:02:36.925 LIB libspdk_bdev_virtio.a 00:02:36.925 SO libspdk_bdev_lvol.so.6.0 00:02:36.925 SO libspdk_bdev_virtio.so.6.0 00:02:37.185 SYMLINK libspdk_bdev_lvol.so 00:02:37.186 SYMLINK libspdk_bdev_virtio.so 00:02:37.447 LIB libspdk_bdev_raid.a 00:02:37.708 SO libspdk_bdev_raid.so.6.0 00:02:37.708 SYMLINK libspdk_bdev_raid.so 00:02:39.093 LIB libspdk_bdev_nvme.a 00:02:39.093 SO libspdk_bdev_nvme.so.7.0 00:02:39.093 SYMLINK libspdk_bdev_nvme.so 00:02:39.665 CC module/event/subsystems/sock/sock.o 00:02:39.665 CC module/event/subsystems/vmd/vmd.o 00:02:39.665 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.665 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.665 CC module/event/subsystems/keyring/keyring.o 00:02:39.666 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.666 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.666 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.927 LIB libspdk_event_scheduler.a 00:02:39.927 LIB libspdk_event_sock.a 00:02:39.927 LIB libspdk_event_keyring.a 00:02:39.927 LIB libspdk_event_vmd.a 00:02:39.927 LIB libspdk_event_vhost_blk.a 00:02:39.927 LIB libspdk_event_iobuf.a 00:02:39.927 SO libspdk_event_sock.so.5.0 00:02:39.927 SO libspdk_event_vhost_blk.so.3.0 00:02:39.927 SO libspdk_event_keyring.so.1.0 00:02:39.927 SO libspdk_event_vmd.so.6.0 00:02:39.927 SO libspdk_event_scheduler.so.4.0 00:02:39.927 SO libspdk_event_iobuf.so.3.0 00:02:39.927 SYMLINK libspdk_event_sock.so 00:02:39.927 SYMLINK libspdk_event_vhost_blk.so 00:02:39.927 SYMLINK libspdk_event_keyring.so 00:02:39.927 SYMLINK libspdk_event_scheduler.so 00:02:39.927 SYMLINK libspdk_event_vmd.so 00:02:39.927 SYMLINK libspdk_event_iobuf.so 00:02:40.499 CC module/event/subsystems/accel/accel.o 00:02:40.499 LIB libspdk_event_accel.a 00:02:40.499 SO libspdk_event_accel.so.6.0 00:02:40.499 SYMLINK libspdk_event_accel.so 00:02:41.072 CC module/event/subsystems/bdev/bdev.o 00:02:41.072 LIB libspdk_event_bdev.a 00:02:41.072 SO libspdk_event_bdev.so.6.0 00:02:41.332 SYMLINK libspdk_event_bdev.so 00:02:41.593 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.593 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.593 CC module/event/subsystems/scsi/scsi.o 00:02:41.593 CC module/event/subsystems/nbd/nbd.o 00:02:41.593 CC module/event/subsystems/ublk/ublk.o 00:02:41.854 LIB libspdk_event_ublk.a 00:02:41.854 LIB libspdk_event_nbd.a 00:02:41.854 LIB libspdk_event_scsi.a 00:02:41.854 SO libspdk_event_ublk.so.3.0 00:02:41.854 SO libspdk_event_nbd.so.6.0 00:02:41.854 SO libspdk_event_scsi.so.6.0 00:02:41.854 LIB libspdk_event_nvmf.a 00:02:41.854 SYMLINK libspdk_event_ublk.so 00:02:41.854 SYMLINK libspdk_event_nbd.so 00:02:41.854 SO libspdk_event_nvmf.so.6.0 00:02:41.854 SYMLINK libspdk_event_scsi.so 00:02:41.854 SYMLINK libspdk_event_nvmf.so 00:02:42.115 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.115 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.376 LIB libspdk_event_vhost_scsi.a 00:02:42.376 LIB libspdk_event_iscsi.a 00:02:42.376 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.376 SO libspdk_event_iscsi.so.6.0 00:02:42.638 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.638 SYMLINK libspdk_event_iscsi.so 00:02:42.638 SO libspdk.so.6.0 00:02:42.638 SYMLINK libspdk.so 00:02:43.211 CC app/trace_record/trace_record.o 00:02:43.211 CC test/rpc_client/rpc_client_test.o 00:02:43.211 CXX app/trace/trace.o 00:02:43.211 TEST_HEADER include/spdk/accel.h 00:02:43.211 TEST_HEADER include/spdk/accel_module.h 00:02:43.211 CC 
app/spdk_nvme_discover/discovery_aer.o 00:02:43.211 TEST_HEADER include/spdk/assert.h 00:02:43.211 TEST_HEADER include/spdk/barrier.h 00:02:43.211 TEST_HEADER include/spdk/base64.h 00:02:43.211 TEST_HEADER include/spdk/bdev.h 00:02:43.211 TEST_HEADER include/spdk/bdev_module.h 00:02:43.211 TEST_HEADER include/spdk/bdev_zone.h 00:02:43.211 TEST_HEADER include/spdk/bit_array.h 00:02:43.211 TEST_HEADER include/spdk/bit_pool.h 00:02:43.211 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:43.211 TEST_HEADER include/spdk/blob_bdev.h 00:02:43.211 CC app/spdk_top/spdk_top.o 00:02:43.211 TEST_HEADER include/spdk/blobfs.h 00:02:43.211 CC app/spdk_nvme_perf/perf.o 00:02:43.211 TEST_HEADER include/spdk/blob.h 00:02:43.211 TEST_HEADER include/spdk/conf.h 00:02:43.211 CC app/spdk_lspci/spdk_lspci.o 00:02:43.211 CC app/spdk_nvme_identify/identify.o 00:02:43.211 TEST_HEADER include/spdk/config.h 00:02:43.211 TEST_HEADER include/spdk/cpuset.h 00:02:43.211 TEST_HEADER include/spdk/crc16.h 00:02:43.211 TEST_HEADER include/spdk/crc32.h 00:02:43.211 TEST_HEADER include/spdk/dif.h 00:02:43.211 TEST_HEADER include/spdk/crc64.h 00:02:43.211 TEST_HEADER include/spdk/endian.h 00:02:43.211 TEST_HEADER include/spdk/dma.h 00:02:43.211 TEST_HEADER include/spdk/env.h 00:02:43.211 TEST_HEADER include/spdk/env_dpdk.h 00:02:43.211 TEST_HEADER include/spdk/event.h 00:02:43.211 TEST_HEADER include/spdk/fd_group.h 00:02:43.211 TEST_HEADER include/spdk/file.h 00:02:43.211 TEST_HEADER include/spdk/fd.h 00:02:43.211 TEST_HEADER include/spdk/ftl.h 00:02:43.211 TEST_HEADER include/spdk/gpt_spec.h 00:02:43.211 TEST_HEADER include/spdk/hexlify.h 00:02:43.211 TEST_HEADER include/spdk/histogram_data.h 00:02:43.211 TEST_HEADER include/spdk/idxd.h 00:02:43.211 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.211 TEST_HEADER include/spdk/init.h 00:02:43.211 TEST_HEADER include/spdk/ioat.h 00:02:43.211 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.211 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.211 TEST_HEADER include/spdk/json.h 00:02:43.211 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.211 TEST_HEADER include/spdk/keyring.h 00:02:43.211 CC app/spdk_dd/spdk_dd.o 00:02:43.211 TEST_HEADER include/spdk/keyring_module.h 00:02:43.211 TEST_HEADER include/spdk/likely.h 00:02:43.211 TEST_HEADER include/spdk/log.h 00:02:43.211 CC app/iscsi_tgt/iscsi_tgt.o 00:02:43.211 TEST_HEADER include/spdk/lvol.h 00:02:43.211 TEST_HEADER include/spdk/mmio.h 00:02:43.211 TEST_HEADER include/spdk/memory.h 00:02:43.211 TEST_HEADER include/spdk/nbd.h 00:02:43.211 TEST_HEADER include/spdk/net.h 00:02:43.211 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.211 TEST_HEADER include/spdk/notify.h 00:02:43.211 CC app/nvmf_tgt/nvmf_main.o 00:02:43.211 TEST_HEADER include/spdk/nvme.h 00:02:43.211 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.211 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.211 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.211 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.211 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.211 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.211 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.211 TEST_HEADER include/spdk/nvmf.h 00:02:43.211 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.211 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.211 TEST_HEADER include/spdk/opal.h 00:02:43.211 TEST_HEADER include/spdk/opal_spec.h 00:02:43.211 TEST_HEADER include/spdk/pci_ids.h 00:02:43.211 TEST_HEADER include/spdk/pipe.h 00:02:43.211 TEST_HEADER include/spdk/reduce.h 00:02:43.211 TEST_HEADER include/spdk/queue.h 00:02:43.211 
TEST_HEADER include/spdk/rpc.h 00:02:43.211 TEST_HEADER include/spdk/scsi.h 00:02:43.211 TEST_HEADER include/spdk/scheduler.h 00:02:43.211 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.211 TEST_HEADER include/spdk/sock.h 00:02:43.211 CC app/spdk_tgt/spdk_tgt.o 00:02:43.211 TEST_HEADER include/spdk/string.h 00:02:43.211 TEST_HEADER include/spdk/stdinc.h 00:02:43.211 TEST_HEADER include/spdk/thread.h 00:02:43.211 TEST_HEADER include/spdk/trace.h 00:02:43.211 TEST_HEADER include/spdk/trace_parser.h 00:02:43.211 TEST_HEADER include/spdk/ublk.h 00:02:43.211 TEST_HEADER include/spdk/tree.h 00:02:43.211 TEST_HEADER include/spdk/util.h 00:02:43.211 TEST_HEADER include/spdk/uuid.h 00:02:43.211 TEST_HEADER include/spdk/version.h 00:02:43.211 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.211 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.211 TEST_HEADER include/spdk/vhost.h 00:02:43.211 TEST_HEADER include/spdk/vmd.h 00:02:43.211 TEST_HEADER include/spdk/xor.h 00:02:43.211 TEST_HEADER include/spdk/zipf.h 00:02:43.211 CXX test/cpp_headers/accel.o 00:02:43.211 CXX test/cpp_headers/accel_module.o 00:02:43.212 CXX test/cpp_headers/assert.o 00:02:43.212 CXX test/cpp_headers/barrier.o 00:02:43.212 CXX test/cpp_headers/base64.o 00:02:43.212 CXX test/cpp_headers/bdev.o 00:02:43.212 CXX test/cpp_headers/bdev_module.o 00:02:43.212 CXX test/cpp_headers/bdev_zone.o 00:02:43.212 CXX test/cpp_headers/bit_array.o 00:02:43.212 CXX test/cpp_headers/bit_pool.o 00:02:43.212 CXX test/cpp_headers/blob_bdev.o 00:02:43.212 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.212 CXX test/cpp_headers/blobfs.o 00:02:43.212 CXX test/cpp_headers/blob.o 00:02:43.212 CXX test/cpp_headers/conf.o 00:02:43.212 CXX test/cpp_headers/config.o 00:02:43.212 CXX test/cpp_headers/cpuset.o 00:02:43.212 CXX test/cpp_headers/crc16.o 00:02:43.212 CXX test/cpp_headers/crc32.o 00:02:43.212 CXX test/cpp_headers/crc64.o 00:02:43.212 CXX test/cpp_headers/dif.o 00:02:43.212 CXX test/cpp_headers/dma.o 00:02:43.212 CXX test/cpp_headers/endian.o 00:02:43.212 CXX test/cpp_headers/env_dpdk.o 00:02:43.212 CXX test/cpp_headers/env.o 00:02:43.212 CXX test/cpp_headers/event.o 00:02:43.212 CXX test/cpp_headers/fd_group.o 00:02:43.212 CXX test/cpp_headers/fd.o 00:02:43.212 CXX test/cpp_headers/file.o 00:02:43.212 CXX test/cpp_headers/ftl.o 00:02:43.212 CXX test/cpp_headers/gpt_spec.o 00:02:43.212 CXX test/cpp_headers/hexlify.o 00:02:43.212 CXX test/cpp_headers/histogram_data.o 00:02:43.212 CXX test/cpp_headers/init.o 00:02:43.212 CXX test/cpp_headers/idxd.o 00:02:43.212 CXX test/cpp_headers/idxd_spec.o 00:02:43.212 CXX test/cpp_headers/ioat.o 00:02:43.212 CXX test/cpp_headers/ioat_spec.o 00:02:43.212 CXX test/cpp_headers/json.o 00:02:43.212 CXX test/cpp_headers/iscsi_spec.o 00:02:43.212 CXX test/cpp_headers/jsonrpc.o 00:02:43.212 CXX test/cpp_headers/keyring.o 00:02:43.212 CXX test/cpp_headers/keyring_module.o 00:02:43.212 CXX test/cpp_headers/lvol.o 00:02:43.212 CXX test/cpp_headers/log.o 00:02:43.212 CXX test/cpp_headers/memory.o 00:02:43.212 CXX test/cpp_headers/likely.o 00:02:43.212 CXX test/cpp_headers/mmio.o 00:02:43.212 CXX test/cpp_headers/net.o 00:02:43.212 CXX test/cpp_headers/nbd.o 00:02:43.212 CXX test/cpp_headers/nvme_intel.o 00:02:43.212 CXX test/cpp_headers/nvme.o 00:02:43.212 CXX test/cpp_headers/notify.o 00:02:43.212 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.212 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.212 CXX test/cpp_headers/nvme_spec.o 00:02:43.212 CXX test/cpp_headers/nvme_zns.o 00:02:43.212 CXX test/cpp_headers/nvmf_cmd.o 
00:02:43.474 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:43.474 CXX test/cpp_headers/nvmf.o 00:02:43.474 CXX test/cpp_headers/nvmf_spec.o 00:02:43.474 CXX test/cpp_headers/nvmf_transport.o 00:02:43.474 CXX test/cpp_headers/opal.o 00:02:43.474 CXX test/cpp_headers/opal_spec.o 00:02:43.474 CXX test/cpp_headers/pci_ids.o 00:02:43.474 CXX test/cpp_headers/rpc.o 00:02:43.474 CXX test/cpp_headers/pipe.o 00:02:43.474 CC test/thread/poller_perf/poller_perf.o 00:02:43.474 CXX test/cpp_headers/queue.o 00:02:43.474 CXX test/cpp_headers/reduce.o 00:02:43.474 CXX test/cpp_headers/scheduler.o 00:02:43.474 CXX test/cpp_headers/scsi.o 00:02:43.474 CXX test/cpp_headers/scsi_spec.o 00:02:43.474 CXX test/cpp_headers/sock.o 00:02:43.474 CXX test/cpp_headers/string.o 00:02:43.474 CXX test/cpp_headers/stdinc.o 00:02:43.474 CXX test/cpp_headers/trace.o 00:02:43.474 CXX test/cpp_headers/thread.o 00:02:43.474 CXX test/cpp_headers/tree.o 00:02:43.474 CXX test/cpp_headers/trace_parser.o 00:02:43.474 CXX test/cpp_headers/ublk.o 00:02:43.474 CXX test/cpp_headers/uuid.o 00:02:43.474 CXX test/cpp_headers/version.o 00:02:43.474 CXX test/cpp_headers/vfio_user_spec.o 00:02:43.474 CXX test/cpp_headers/vfio_user_pci.o 00:02:43.474 CXX test/cpp_headers/vhost.o 00:02:43.474 CXX test/cpp_headers/util.o 00:02:43.474 LINK spdk_lspci 00:02:43.474 CXX test/cpp_headers/vmd.o 00:02:43.474 CC test/env/vtophys/vtophys.o 00:02:43.474 CXX test/cpp_headers/xor.o 00:02:43.474 CXX test/cpp_headers/zipf.o 00:02:43.474 CC test/app/stub/stub.o 00:02:43.474 CC examples/util/zipf/zipf.o 00:02:43.474 CC examples/ioat/verify/verify.o 00:02:43.474 CC test/env/memory/memory_ut.o 00:02:43.474 CC test/app/histogram_perf/histogram_perf.o 00:02:43.474 CC test/env/pci/pci_ut.o 00:02:43.474 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.474 CC test/app/jsoncat/jsoncat.o 00:02:43.474 CC examples/ioat/perf/perf.o 00:02:43.474 CC app/fio/nvme/fio_plugin.o 00:02:43.474 CC test/dma/test_dma/test_dma.o 00:02:43.474 LINK rpc_client_test 00:02:43.474 CC test/app/bdev_svc/bdev_svc.o 00:02:43.474 CC app/fio/bdev/fio_plugin.o 00:02:43.474 LINK spdk_nvme_discover 00:02:43.474 LINK spdk_trace_record 00:02:43.735 LINK iscsi_tgt 00:02:43.735 LINK nvmf_tgt 00:02:43.735 LINK interrupt_tgt 00:02:43.735 CC test/env/mem_callbacks/mem_callbacks.o 00:02:43.735 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:43.735 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:43.735 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:43.735 LINK jsoncat 00:02:43.735 LINK spdk_tgt 00:02:43.735 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.996 LINK ioat_perf 00:02:43.996 LINK zipf 00:02:43.996 LINK histogram_perf 00:02:43.996 LINK poller_perf 00:02:43.996 LINK vtophys 00:02:43.996 LINK spdk_dd 00:02:43.996 LINK env_dpdk_post_init 00:02:43.996 LINK stub 00:02:43.996 LINK bdev_svc 00:02:44.256 LINK spdk_trace 00:02:44.256 LINK verify 00:02:44.256 LINK test_dma 00:02:44.256 LINK vhost_fuzz 00:02:44.256 LINK nvme_fuzz 00:02:44.256 LINK pci_ut 00:02:44.256 LINK spdk_bdev 00:02:44.256 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.530 CC examples/vmd/led/led.o 00:02:44.530 CC examples/idxd/perf/perf.o 00:02:44.530 CC examples/thread/thread/thread_ex.o 00:02:44.530 CC examples/sock/hello_world/hello_sock.o 00:02:44.530 CC test/event/reactor/reactor.o 00:02:44.530 LINK mem_callbacks 00:02:44.530 CC test/event/reactor_perf/reactor_perf.o 00:02:44.530 CC test/event/event_perf/event_perf.o 00:02:44.530 CC test/event/app_repeat/app_repeat.o 00:02:44.530 CC app/vhost/vhost.o 00:02:44.530 LINK lsvmd 
00:02:44.530 LINK spdk_nvme 00:02:44.530 LINK spdk_nvme_identify 00:02:44.530 CC test/event/scheduler/scheduler.o 00:02:44.530 LINK spdk_top 00:02:44.530 LINK led 00:02:44.530 LINK spdk_nvme_perf 00:02:44.530 LINK reactor 00:02:44.530 LINK reactor_perf 00:02:44.530 LINK event_perf 00:02:44.796 LINK app_repeat 00:02:44.796 LINK thread 00:02:44.796 CC test/nvme/overhead/overhead.o 00:02:44.796 CC test/nvme/e2edp/nvme_dp.o 00:02:44.796 CC test/nvme/reset/reset.o 00:02:44.796 CC test/nvme/startup/startup.o 00:02:44.796 LINK vhost 00:02:44.796 CC test/nvme/reserve/reserve.o 00:02:44.796 CC test/nvme/boot_partition/boot_partition.o 00:02:44.796 CC test/nvme/cuse/cuse.o 00:02:44.796 CC test/nvme/fused_ordering/fused_ordering.o 00:02:44.796 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:44.796 CC test/nvme/compliance/nvme_compliance.o 00:02:44.796 CC test/nvme/aer/aer.o 00:02:44.796 CC test/nvme/sgl/sgl.o 00:02:44.796 CC test/nvme/err_injection/err_injection.o 00:02:44.796 CC test/nvme/connect_stress/connect_stress.o 00:02:44.796 CC test/nvme/fdp/fdp.o 00:02:44.796 LINK hello_sock 00:02:44.796 CC test/nvme/simple_copy/simple_copy.o 00:02:44.796 CC test/blobfs/mkfs/mkfs.o 00:02:44.796 CC test/accel/dif/dif.o 00:02:44.796 LINK idxd_perf 00:02:44.796 LINK scheduler 00:02:44.796 CC test/lvol/esnap/esnap.o 00:02:44.796 LINK boot_partition 00:02:44.796 LINK startup 00:02:45.056 LINK connect_stress 00:02:45.056 LINK err_injection 00:02:45.056 LINK doorbell_aers 00:02:45.056 LINK fused_ordering 00:02:45.056 LINK mkfs 00:02:45.056 LINK reserve 00:02:45.056 LINK reset 00:02:45.056 LINK simple_copy 00:02:45.056 LINK overhead 00:02:45.056 LINK memory_ut 00:02:45.056 LINK nvme_dp 00:02:45.056 LINK sgl 00:02:45.056 LINK aer 00:02:45.056 LINK fdp 00:02:45.056 LINK nvme_compliance 00:02:45.317 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:45.317 CC examples/nvme/hello_world/hello_world.o 00:02:45.317 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:45.317 CC examples/nvme/arbitration/arbitration.o 00:02:45.317 CC examples/nvme/hotplug/hotplug.o 00:02:45.317 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:45.317 CC examples/nvme/reconnect/reconnect.o 00:02:45.317 CC examples/accel/perf/accel_perf.o 00:02:45.317 CC examples/nvme/abort/abort.o 00:02:45.317 LINK dif 00:02:45.317 CC examples/blob/hello_world/hello_blob.o 00:02:45.317 CC examples/blob/cli/blobcli.o 00:02:45.317 LINK pmr_persistence 00:02:45.317 LINK cmb_copy 00:02:45.578 LINK hotplug 00:02:45.578 LINK hello_world 00:02:45.578 LINK hello_blob 00:02:45.578 LINK arbitration 00:02:45.578 LINK reconnect 00:02:45.578 LINK abort 00:02:45.840 LINK iscsi_fuzz 00:02:45.840 LINK accel_perf 00:02:45.840 LINK nvme_manage 00:02:45.840 LINK blobcli 00:02:45.840 CC test/bdev/bdevio/bdevio.o 00:02:46.102 LINK cuse 00:02:46.363 LINK bdevio 00:02:46.363 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.363 CC examples/bdev/bdevperf/bdevperf.o 00:02:46.624 LINK hello_bdev 00:02:47.197 LINK bdevperf 00:02:47.773 CC examples/nvmf/nvmf/nvmf.o 00:02:48.034 LINK nvmf 00:02:49.999 LINK esnap 00:02:50.259 00:02:50.259 real 0m55.127s 00:02:50.259 user 6m59.249s 00:02:50.259 sys 4m3.139s 00:02:50.259 19:07:09 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:50.259 19:07:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.259 ************************************ 00:02:50.259 END TEST make 00:02:50.259 ************************************ 00:02:50.259 19:07:09 -- common/autotest_common.sh@1142 -- $ return 0 00:02:50.259 19:07:09 -- 
spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.259 19:07:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.259 19:07:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.259 19:07:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.259 19:07:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.259 19:07:09 -- pm/common@44 -- $ pid=2539509 00:02:50.259 19:07:09 -- pm/common@50 -- $ kill -TERM 2539509 00:02:50.259 19:07:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.259 19:07:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.259 19:07:09 -- pm/common@44 -- $ pid=2539510 00:02:50.259 19:07:09 -- pm/common@50 -- $ kill -TERM 2539510 00:02:50.259 19:07:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.259 19:07:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:50.259 19:07:09 -- pm/common@44 -- $ pid=2539512 00:02:50.259 19:07:09 -- pm/common@50 -- $ kill -TERM 2539512 00:02:50.259 19:07:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.259 19:07:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:50.259 19:07:09 -- pm/common@44 -- $ pid=2539538 00:02:50.259 19:07:09 -- pm/common@50 -- $ sudo -E kill -TERM 2539538 00:02:50.523 19:07:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.523 19:07:09 -- nvmf/common.sh@7 -- # uname -s 00:02:50.523 19:07:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.523 19:07:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.523 19:07:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.523 19:07:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.523 19:07:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.523 19:07:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.523 19:07:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.523 19:07:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.523 19:07:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.523 19:07:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.523 19:07:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:50.523 19:07:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:50.523 19:07:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.523 19:07:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.523 19:07:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:50.523 19:07:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.523 19:07:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:50.523 19:07:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.523 19:07:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.523 19:07:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.523 19:07:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.523 19:07:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.523 19:07:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.523 19:07:09 -- paths/export.sh@5 -- # export PATH 00:02:50.523 19:07:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.523 19:07:09 -- nvmf/common.sh@47 -- # : 0 00:02:50.523 19:07:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:50.523 19:07:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:50.523 19:07:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.523 19:07:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.523 19:07:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.523 19:07:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:50.523 19:07:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:50.523 19:07:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:50.523 19:07:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.523 19:07:09 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.523 19:07:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.523 19:07:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.523 19:07:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.523 19:07:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.523 19:07:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.523 19:07:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.523 19:07:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.523 19:07:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.523 19:07:09 -- spdk/autotest.sh@48 -- # udevadm_pid=2603454 00:02:50.523 19:07:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.523 19:07:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.523 19:07:09 -- pm/common@17 -- # local monitor 00:02:50.523 19:07:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.523 19:07:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.523 19:07:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.523 19:07:09 -- pm/common@21 -- # date +%s 00:02:50.523 19:07:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.523 19:07:09 -- pm/common@25 -- # sleep 1 00:02:50.523 
19:07:09 -- pm/common@21 -- # date +%s 00:02:50.523 19:07:09 -- pm/common@21 -- # date +%s 00:02:50.523 19:07:09 -- pm/common@21 -- # date +%s 00:02:50.523 19:07:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721668029 00:02:50.523 19:07:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721668029 00:02:50.523 19:07:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721668029 00:02:50.523 19:07:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721668029 00:02:50.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721668029_collect-vmstat.pm.log 00:02:50.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721668029_collect-cpu-load.pm.log 00:02:50.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721668029_collect-cpu-temp.pm.log 00:02:50.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721668029_collect-bmc-pm.bmc.pm.log 00:02:51.468 19:07:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.468 19:07:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:51.468 19:07:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:51.468 19:07:10 -- common/autotest_common.sh@10 -- # set +x 00:02:51.468 19:07:10 -- spdk/autotest.sh@59 -- # create_test_list 00:02:51.468 19:07:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:51.468 19:07:10 -- common/autotest_common.sh@10 -- # set +x 00:02:51.731 19:07:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:51.731 19:07:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.731 19:07:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.731 19:07:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:51.731 19:07:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.731 19:07:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:51.731 19:07:10 -- common/autotest_common.sh@1455 -- # uname 00:02:51.731 19:07:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:51.731 19:07:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:51.731 19:07:10 -- common/autotest_common.sh@1475 -- # uname 00:02:51.731 19:07:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:51.731 19:07:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:51.731 19:07:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:51.731 19:07:10 -- spdk/autotest.sh@72 -- # hash lcov 00:02:51.731 19:07:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:51.731 19:07:10 -- spdk/autotest.sh@80 -- # export 
'LCOV_OPTS= 00:02:51.731 --rc lcov_branch_coverage=1 00:02:51.731 --rc lcov_function_coverage=1 00:02:51.731 --rc genhtml_branch_coverage=1 00:02:51.731 --rc genhtml_function_coverage=1 00:02:51.731 --rc genhtml_legend=1 00:02:51.731 --rc geninfo_all_blocks=1 00:02:51.731 ' 00:02:51.731 19:07:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:51.731 --rc lcov_branch_coverage=1 00:02:51.731 --rc lcov_function_coverage=1 00:02:51.731 --rc genhtml_branch_coverage=1 00:02:51.731 --rc genhtml_function_coverage=1 00:02:51.731 --rc genhtml_legend=1 00:02:51.731 --rc geninfo_all_blocks=1 00:02:51.731 ' 00:02:51.731 19:07:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:51.731 --rc lcov_branch_coverage=1 00:02:51.731 --rc lcov_function_coverage=1 00:02:51.731 --rc genhtml_branch_coverage=1 00:02:51.731 --rc genhtml_function_coverage=1 00:02:51.731 --rc genhtml_legend=1 00:02:51.731 --rc geninfo_all_blocks=1 00:02:51.731 --no-external' 00:02:51.731 19:07:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:51.731 --rc lcov_branch_coverage=1 00:02:51.731 --rc lcov_function_coverage=1 00:02:51.731 --rc genhtml_branch_coverage=1 00:02:51.731 --rc genhtml_function_coverage=1 00:02:51.731 --rc genhtml_legend=1 00:02:51.731 --rc geninfo_all_blocks=1 00:02:51.731 --no-external' 00:02:51.731 19:07:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:51.731 lcov: LCOV version 1.14 00:02:51.731 19:07:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:01.739 
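At this point the trace has exported LCOV_OPTS and LCOV with branch and function coverage enabled and has captured an initial "Baseline" tracefile over the whole source tree; the geninfo warnings immediately before and after this point are expected for header-only objects that contain no functions. Below is a minimal sketch of that capture flow together with the usual follow-up steps. The baseline capture mirrors the command in the log; the post-test capture, the combine, and the genhtml report are assumptions based on standard lcov usage rather than commands shown here.

#!/usr/bin/env bash
# Sketch of an lcov baseline + report flow matching the options used above.
set -euo pipefail

src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$src/../output

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external"

# 1) Zero-coverage baseline taken before any test runs (-c capture, -i initial).
lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"

# ... run the test suites here ...

# 2) Capture the counters accumulated by the tests (assumed step).
lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o "$out/cov_test.info"

# 3) Combine baseline and test data so untouched files still show up at 0% (assumed step).
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# 4) Render an HTML report from the combined tracefile (assumed step).
genhtml "$out/cov_total.info" --output-directory "$out/coverage"
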
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:01.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:01.740 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:01.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:02.002 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:02.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:02.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:02.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:02.262 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:14.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:14.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:21.090 19:07:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:21.090 19:07:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:21.090 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:03:21.090 19:07:39 -- spdk/autotest.sh@91 -- # rm -f 00:03:21.090 19:07:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.642 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:23.642 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:23.642 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:24.214 19:07:42 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:24.214 19:07:42 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:24.214 19:07:42 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:24.214 19:07:42 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:24.214 19:07:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:24.214 19:07:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:24.214 19:07:42 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:24.214 19:07:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.214 19:07:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:24.214 19:07:42 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:24.214 19:07:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:24.214 19:07:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:24.214 19:07:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:24.214 19:07:42 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:24.214 19:07:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:24.214 No valid GPT data, bailing 00:03:24.214 19:07:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:24.214 19:07:42 -- scripts/common.sh@391 -- # pt= 00:03:24.214 19:07:42 -- scripts/common.sh@392 -- # return 1 00:03:24.214 19:07:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 
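Before wiping the drive, the trace checks whether /dev/nvme0n1 is already in use: it looks for a GPT with SPDK's spdk-gpt.py helper and then for any partition-table signature with blkid, and only after both come back empty does it overwrite the first megabyte, whose transcript follows below. Here is a standalone sketch of that guard using only the stock blkid probe; the spdk-gpt.py call is specific to the repository shown in the log and is omitted, and the device name is simply the one printed in the trace.

#!/usr/bin/env bash
# Sketch: refuse to scrub a block device that still carries a partition table.
set -euo pipefail

dev=/dev/nvme0n1   # device name taken from the trace; adjust as needed

# blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
pt=$(blkid -s PTTYPE -o value "$dev" || true)

if [[ -n $pt ]]; then
    echo "$dev already has a $pt partition table, leaving it alone" >&2
    exit 1
fi

# No signature found: clear the first MiB so later tests start from a blank device.
dd if=/dev/zero of="$dev" bs=1M count=1
sync
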
00:03:24.214 1+0 records in 00:03:24.214 1+0 records out 00:03:24.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469366 s, 223 MB/s 00:03:24.214 19:07:42 -- spdk/autotest.sh@118 -- # sync 00:03:24.214 19:07:42 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:24.214 19:07:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:24.214 19:07:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:32.435 19:07:50 -- spdk/autotest.sh@124 -- # uname -s 00:03:32.435 19:07:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:32.435 19:07:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:32.435 19:07:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.435 19:07:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.435 19:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:32.435 ************************************ 00:03:32.435 START TEST setup.sh 00:03:32.435 ************************************ 00:03:32.435 19:07:50 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:32.435 * Looking for test storage... 00:03:32.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.435 19:07:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:32.435 19:07:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:32.435 19:07:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:32.435 19:07:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.435 19:07:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.435 19:07:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.435 ************************************ 00:03:32.435 START TEST acl 00:03:32.435 ************************************ 00:03:32.435 19:07:50 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:32.435 * Looking for test storage... 
00:03:32.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.435 19:07:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:32.435 19:07:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:32.435 19:07:51 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:32.435 19:07:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:32.435 19:07:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:32.435 19:07:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:32.435 19:07:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:32.435 19:07:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.435 19:07:51 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.642 19:07:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:36.642 19:07:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:36.642 19:07:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.642 19:07:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:36.642 19:07:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.642 19:07:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:39.946 Hugepages 00:03:39.946 node hugesize free / total 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 00:03:39.946 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:39.946 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:39.947 19:07:58 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:39.947 19:07:58 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.947 19:07:58 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.947 19:07:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.947 ************************************ 00:03:39.947 START TEST denied 00:03:39.947 ************************************ 00:03:39.947 19:07:58 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:39.947 19:07:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:39.947 19:07:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:39.947 19:07:58 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:39.947 19:07:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.947 19:07:58 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.154 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:44.154 19:08:02 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:44.154 19:08:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.155 19:08:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.361 00:03:48.361 real 0m8.399s 00:03:48.361 user 0m2.811s 00:03:48.361 sys 0m4.822s 00:03:48.361 19:08:06 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.361 19:08:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:48.361 ************************************ 00:03:48.361 END TEST denied 00:03:48.361 ************************************ 00:03:48.361 19:08:06 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:48.361 19:08:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:48.361 19:08:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.361 19:08:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.361 19:08:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.361 ************************************ 00:03:48.361 START TEST allowed 00:03:48.361 ************************************ 00:03:48.361 19:08:06 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:48.361 19:08:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:48.361 19:08:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:48.361 19:08:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:48.361 19:08:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.361 19:08:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.668 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:53.669 19:08:12 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:53.669 19:08:12 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:53.669 19:08:12 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:53.669 19:08:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.669 19:08:12 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.876 00:03:57.876 real 0m9.423s 00:03:57.876 user 0m2.806s 00:03:57.876 sys 0m4.879s 00:03:57.876 19:08:16 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.876 19:08:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:57.876 ************************************ 00:03:57.876 END TEST allowed 00:03:57.876 ************************************ 00:03:57.876 19:08:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:57.876 00:03:57.876 real 0m25.532s 00:03:57.876 user 0m8.555s 00:03:57.876 sys 0m14.662s 00:03:57.876 19:08:16 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.876 19:08:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.876 ************************************ 00:03:57.876 END TEST acl 00:03:57.876 ************************************ 00:03:57.876 19:08:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:57.876 19:08:16 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:57.876 19:08:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.876 19:08:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.876 19:08:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.876 ************************************ 00:03:57.876 START TEST hugepages 00:03:57.876 ************************************ 00:03:57.876 19:08:16 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:57.876 * Looking for test storage... 00:03:57.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103044008 kB' 'MemAvailable: 106369468 kB' 'Buffers: 2704 kB' 'Cached: 14641148 kB' 'SwapCached: 0 kB' 'Active: 11594448 kB' 'Inactive: 3518544 kB' 'Active(anon): 11115224 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472552 kB' 'Mapped: 222988 kB' 'Shmem: 10646084 kB' 'KReclaimable: 304048 kB' 'Slab: 1126944 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 822896 kB' 'KernelStack: 27216 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12640064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.876 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.877 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:57.878 19:08:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:57.878 19:08:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.878 19:08:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.878 19:08:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.878 ************************************ 00:03:57.878 START TEST default_setup 00:03:57.878 ************************************ 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.878 19:08:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.194 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:80:01.4 (8086 0b00): ioatdma -> 
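For reference, the hugepage bookkeeping traced above reduces to a small amount of arithmetic: get_test_nr_hugepages converts the requested test size (2097152 kB, i.e. 2 GiB) into a page count using the default hugepage size parsed earlier (2048 kB), giving nr_hugepages=1024 on node 0, after clear_hp has zeroed every pre-existing per-node pool. The following is a condensed sketch reconstructed from the trace, not the verbatim setup/hugepages.sh source; in particular the sysfs redirection in clear_hp is an assumption, since bash xtrace does not print redirections.

# Condensed reconstruction of the hugepage bookkeeping traced above
# (setup/hugepages.sh). Simplified; the redirection target in clear_hp is an
# assumption, because xtrace only shows the bare 'echo 0'.
default_hugepages=2048                      # kB, from the Hugepagesize line parsed above
declare -a nodes_test=()

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"     # assumed target of the traced 'echo 0' (needs root)
        done
    done
}

get_test_nr_hugepages() {
    local size=$1; shift                    # 2097152 kB requested for the test
    local node_ids=("$@")                   # ('0') in the trace
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages      # all 1024 pages assigned to node 0 here
    done
}

clear_hp
get_test_nr_hugepages 2097152 0

The per-node assignment matters on this two-node host: the trace shows nodes_sys populated for node0 and node1, but the test pins its entire 1024-page pool to node 0 before setup.sh rebinds the ioatdma and NVMe devices to vfio-pci below.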
vfio-pci 00:04:01.194 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:01.194 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:01.195 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:01.195 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105238816 kB' 'MemAvailable: 108564276 kB' 'Buffers: 2704 kB' 'Cached: 14641272 kB' 'SwapCached: 0 kB' 'Active: 11611016 kB' 'Inactive: 3518544 kB' 'Active(anon): 11131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489012 kB' 'Mapped: 223180 kB' 'Shmem: 10646208 kB' 'KReclaimable: 304048 kB' 'Slab: 1124524 kB' 'SReclaimable: 304048 
kB' 'SUnreclaim: 820476 kB' 'KernelStack: 27216 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12653720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235188 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.195 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- 
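The long runs of pattern checks above (and the identical runs earlier for Hugepagesize) are the xtrace of one helper, get_meminfo, scanning /proc/meminfo key by key until it finds the requested field; when a node argument is supplied it reads that node's meminfo from sysfs instead. The sketch below is condensed from the visible trace (setup/common.sh@16-33) with minor details such as error handling simplified, so it should be read as a reconstruction rather than the verbatim source.

# Condensed reconstruction of get_meminfo as seen in the trace above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f=/proc/meminfo
    local -a mem
    # When a node is given, read that node's meminfo from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node files
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Usage matching the trace: Hugepagesize resolves to 2048 (kB) on this host,
# and AnonHugePages just resolved to 0 above.
get_meminfo Hugepagesize
get_meminfo HugePages_Free 0    # same key, scoped to NUMA node 0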
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105240748 kB' 'MemAvailable: 108566208 kB' 'Buffers: 2704 kB' 'Cached: 14641276 kB' 'SwapCached: 0 kB' 'Active: 11610568 kB' 'Inactive: 3518544 kB' 'Active(anon): 11131344 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488544 kB' 'Mapped: 223172 kB' 'Shmem: 10646212 kB' 'KReclaimable: 304048 kB' 'Slab: 1124588 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820540 kB' 'KernelStack: 27184 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12653740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235172 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.196 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.197 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- 
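At this point verify_nr_hugepages has collected anon=0 and surp=0, and the trace continues below with a HugePages_Rsvd read. The exact assertion the verifier applies is not visible in this excerpt, so the following is only a hedged sketch of the kind of sanity check these three values typically feed, using the get_meminfo helper shown earlier and the pool sizes reported in the meminfo dumps above (HugePages_Total: 1024, HugePages_Free: 1024).

# Hedged sketch only; the real comparison in setup/hugepages.sh is not shown in
# this part of the log.
anon=$(get_meminfo AnonHugePages)      # anonymous THP usage, in kB
surp=$(get_meminfo HugePages_Surp)     # surplus pages beyond the configured pool
resv=$(get_meminfo HugePages_Rsvd)     # pages reserved but not yet faulted in
total=$(get_meminfo HugePages_Total)
free=$(get_meminfo HugePages_Free)

# With nothing consuming the pool yet, the whole allocation should still be free
# and no surplus pages should exist.
(( free == total )) || echo "unexpected hugepage usage: free=$free total=$total"
(( surp == 0 ))     || echo "unexpected surplus hugepages: $surp"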
setup/common.sh@28 -- # mapfile -t mem 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105244044 kB' 'MemAvailable: 108569504 kB' 'Buffers: 2704 kB' 'Cached: 14641292 kB' 'SwapCached: 0 kB' 'Active: 11611396 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132172 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489580 kB' 'Mapped: 223172 kB' 'Shmem: 10646228 kB' 'KReclaimable: 304048 kB' 'Slab: 1124588 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820540 kB' 'KernelStack: 27200 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12653760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.198 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 
19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.199 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.200 nr_hugepages=1024 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.200 resv_hugepages=0 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.200 surplus_hugepages=0 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.200 anon_hugepages=0 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.200 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105244524 kB' 'MemAvailable: 108569984 kB' 'Buffers: 2704 kB' 'Cached: 
14641316 kB' 'SwapCached: 0 kB' 'Active: 11611324 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489488 kB' 'Mapped: 223172 kB' 'Shmem: 10646252 kB' 'KReclaimable: 304048 kB' 'Slab: 1124588 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820540 kB' 'KernelStack: 27184 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12653784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 
19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.201 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58183108 kB' 'MemUsed: 7475900 kB' 'SwapCached: 0 kB' 'Active: 2696564 kB' 'Inactive: 225208 kB' 'Active(anon): 2457140 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 225208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739088 kB' 'Mapped: 89596 kB' 'AnonPages: 185576 kB' 'Shmem: 2274456 kB' 'KernelStack: 14616 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124384 kB' 'Slab: 591228 kB' 
'SReclaimable: 124384 kB' 'SUnreclaim: 466844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.202 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
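The repeated IFS=': ' / read -r var val _ / [[ ... ]] / continue entries above are setup/common.sh's get_meminfo walking a meminfo snapshot one field at a time until it reaches the requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total), then echoing that key's value. A minimal bash sketch of the same pattern, reconstructed from this trace rather than copied from the SPDK tree; the function name, condensed structure, and usage comments are illustrative:

shopt -s extglob

# Sketch of the meminfo walk traced above (illustrative, not the verbatim
# setup/common.sh implementation).
get_meminfo_sketch() {
    local get=$1 node=${2:-}          # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    local mem var val _

    # With a node number and an existing per-node file, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it so the keys line up.
    mem=("${mem[@]#Node +([0-9]) }")

    # Split each "Key:   value [kB]" line and stop at the first matching key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# The accounting exercised above, with the values this run reported:
#   surp=$(get_meminfo_sketch HugePages_Surp)    # 0
#   resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
#   total=$(get_meminfo_sketch HugePages_Total)  # 1024
#   (( total == 1024 + surp + resv ))            # matches nr_hugepages=1024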
00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.203 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
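The repetitive trace above and below is the get_meminfo helper (setup/common.sh in the xtrace) scanning /proc/meminfo one field at a time: it reads the file into an array, strips any "Node <n> " prefix, then splits each line with IFS=': ' and keeps hitting continue until the requested key (here HugePages_Surp) matches, at which point it echoes the value and returns 0. A minimal, self-contained sketch of that pattern follows; the function name get_meminfo_sketch and the for/here-string loop form are illustrative assumptions based on the trace, not the exact SPDK implementation.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip that appears in the trace

  # Hypothetical re-creation of the traced helper: print the value of one meminfo field,
  # optionally from a per-NUMA-node meminfo file.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo
      local -a mem

      # The trace tests /sys/devices/system/node/node$node/meminfo when a node id is supplied.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix every line with "Node <n> "

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue    # same match-or-continue scan as in the xtrace
          echo "$val"                         # e.g. HugePages_Surp -> 0 on this host
          return 0
      done
      return 1
  }

Called as get_meminfo_sketch HugePages_Surp it prints the surplus hugepage count that the trace resolves to 0; get_meminfo_sketch HugePages_Total 0 would read node 0's meminfo instead.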
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:01.204 node0=1024 expecting 1024
19:08:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:01.204
00:04:01.204 real 0m3.423s
00:04:01.204 user 0m1.169s
00:04:01.204 sys 0m2.198s
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:01.204 19:08:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:01.204 ************************************
00:04:01.204 END TEST default_setup
00:04:01.204 ************************************
00:04:01.465 19:08:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:01.465 19:08:20 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:01.465 19:08:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:01.465 19:08:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:01.465 19:08:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.465 ************************************
00:04:01.465 START TEST per_node_1G_alloc
00:04:01.465 ************************************
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.465 19:08:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:04.842 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:04.842 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:04.842 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.842
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.842 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.842 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.842 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105245404 kB' 'MemAvailable: 108570864 kB' 'Buffers: 2704 kB' 'Cached: 14641432 kB' 'SwapCached: 0 kB' 'Active: 11610808 kB' 'Inactive: 3518544 kB' 'Active(anon): 11131584 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488004 kB' 'Mapped: 222184 kB' 'Shmem: 10646368 kB' 'KReclaimable: 304048 kB' 'Slab: 1124708 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820660 kB' 'KernelStack: 27168 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12647020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.109 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.110 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105246340 kB' 'MemAvailable: 108571800 kB' 'Buffers: 2704 kB' 'Cached: 14641436 kB' 'SwapCached: 0 kB' 'Active: 11610136 kB' 'Inactive: 3518544 kB' 'Active(anon): 11130912 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487344 kB' 'Mapped: 222176 kB' 'Shmem: 10646372 kB' 'KReclaimable: 304048 kB' 'Slab: 1124680 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820632 kB' 'KernelStack: 27120 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12647040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.111 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 
19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.112 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.112 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105247356 kB' 'MemAvailable: 108572816 kB' 'Buffers: 2704 kB' 'Cached: 14641452 kB' 'SwapCached: 0 kB' 'Active: 11609664 kB' 'Inactive: 3518544 kB' 'Active(anon): 11130440 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487328 kB' 'Mapped: 222100 kB' 'Shmem: 10646388 kB' 'KReclaimable: 304048 kB' 'Slab: 1124588 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820540 kB' 'KernelStack: 27152 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12647060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 
19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.113 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.114 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.115 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.115 nr_hugepages=1024 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.115 resv_hugepages=0 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.115 surplus_hugepages=0 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.115 anon_hugepages=0 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105247796 kB' 'MemAvailable: 108573256 kB' 'Buffers: 2704 kB' 'Cached: 14641476 kB' 'SwapCached: 0 kB' 'Active: 11609480 kB' 'Inactive: 3518544 kB' 'Active(anon): 11130256 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487120 kB' 'Mapped: 222284 kB' 'Shmem: 10646412 kB' 'KReclaimable: 304048 kB' 'Slab: 1124588 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820540 kB' 'KernelStack: 27136 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12648044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 
101711872 kB' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.115 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.117 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59231892 kB' 'MemUsed: 6427116 kB' 'SwapCached: 0 kB' 'Active: 2694864 kB' 'Inactive: 225208 kB' 'Active(anon): 2455440 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 225208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739208 kB' 'Mapped: 88692 kB' 'AnonPages: 184092 kB' 'Shmem: 2274576 kB' 'KernelStack: 14616 kB' 'PageTables: 4960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124384 kB' 'Slab: 591308 kB' 'SReclaimable: 124384 kB' 'SUnreclaim: 466924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 
19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.118 19:08:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46008124 kB' 'MemUsed: 14671712 kB' 'SwapCached: 0 kB' 'Active: 8920380 kB' 'Inactive: 3293336 kB' 'Active(anon): 8680580 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3293336 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11904996 kB' 'Mapped: 134132 kB' 'AnonPages: 308876 kB' 'Shmem: 8371860 kB' 
'KernelStack: 12504 kB' 'PageTables: 3400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 179664 kB' 'Slab: 533280 kB' 'SReclaimable: 179664 kB' 'SUnreclaim: 353616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.119 19:08:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 
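The xtrace above and below this point is setup/common.sh's get_meminfo scanning /sys/devices/system/node/node1/meminfo line by line: every field name is compared against the requested key (HugePages_Surp here) and skipped with continue until the match, whose value (0) is echoed back to hugepages.sh. A minimal stand-alone sketch of that parsing, assuming bash with extglob; it mirrors the shape the trace implies rather than the verbatim SPDK helper, so the function name carries a _sketch suffix:

#!/usr/bin/env bash
# get_meminfo-style lookup: read one field from the global or per-node meminfo.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # prefer the per-node file when a node index is given and the file exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    local line
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long run of 'continue' entries in the trace
        echo "${val:-0}"
        return 0
    done
    echo 0
}

get_meminfo_sketch HugePages_Surp 1   # prints 0 on the node traced above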
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.120 node0=512 expecting 512 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:05.120 node1=512 expecting 512 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:05.120 00:04:05.120 real 0m3.817s 00:04:05.120 user 0m1.556s 00:04:05.120 sys 0m2.313s 00:04:05.120 19:08:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.120 19:08:24 
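hugepages.sh@115-@130 in the trace just above folds the reserved and surplus counts into each node's expectation and then compares that with what the kernel reports, which is where the two 'nodeN=512 expecting 512' lines come from. A compressed sketch of that bookkeeping with this run's values plugged in (resv and surp were both 0 here); the array names follow the trace, the literal numbers are taken from this log only:

#!/usr/bin/env bash
# per_node_1G_alloc-style check: does each node hold the hugepages the test asked for?
nodes_test=(512 512)   # pages requested per node by the test
nodes_sys=(512 512)    # HugePages_Free the kernel reports per node (see the dumps above)
resv=0                 # global HugePages_Rsvd
for node in "${!nodes_test[@]}"; do
    surp=0                                  # get_meminfo HugePages_Surp "$node" returned 0 above
    (( nodes_test[node] += resv + surp ))   # reserved/surplus pages count toward the expectation
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
done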
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.120 ************************************ 00:04:05.120 END TEST per_node_1G_alloc 00:04:05.120 ************************************ 00:04:05.382 19:08:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:05.382 19:08:24 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:05.382 19:08:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.382 19:08:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.382 19:08:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.382 ************************************ 00:04:05.382 START TEST even_2G_alloc 00:04:05.382 ************************************ 00:04:05.382 19:08:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:05.382 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:05.383 19:08:24 
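The even_2G_alloc prologue above (get_test_nr_hugepages 2097152 followed by get_test_nr_hugepages_per_node) turns a 2 GiB request into 1024 default-sized pages and assigns 512 of them to each of the two NUMA nodes before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are set for setup.sh. The arithmetic as a sketch with this run's numbers; the 2048 kB default hugepage size is an assumption read off the Hugepagesize field in the dumps below, and the even split per node matches the 512/512 the trace assigns:

#!/usr/bin/env bash
# even_2G_alloc sizing: total request -> page count -> even per-node split
size_kb=2097152            # requested allocation in kB (2 GiB)
default_hugepages_kb=2048  # Hugepagesize reported in the meminfo dumps below
no_nodes=2                 # NUMA nodes present on this box

(( size_kb >= default_hugepages_kb )) || { echo "request below one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1024
per_node=$(( nr_hugepages / no_nodes ))              # 512

nodes_test=()
for (( node = no_nodes - 1; node >= 0; node-- )); do  # the trace fills the array from the top down
    nodes_test[node]=$per_node
done
echo "NRHUGE=$nr_hugepages per node: ${nodes_test[*]}"   # NRHUGE=1024 per node: 512 512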
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.383 19:08:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.687 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:08.687 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.687 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.955 19:08:27 
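Right after setup.sh returns, verify_nr_hugepages checks transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is the content of /sys/kernel/mm/transparent_hugepage/enabled being matched against '[never]', and because THP is not fully disabled the script immediately samples AnonHugePages (which comes back as 0 further down). A hedged sketch of that step; the awk lookup stands in for the get_meminfo helper the real script uses:

#!/usr/bin/env bash
# THP gate from the trace: only take an AnonHugePages baseline when transparent
# hugepages are not globally disabled.
thp_file=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
if [[ -r $thp_file ]]; then
    thp=$(<"$thp_file")                  # "always [madvise] never" on the traced node
    if [[ $thp != *"[never]"* ]]; then
        # capture the current AnonHugePages figure, as the trace does right after this check
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    fi
fi
echo "anon=${anon:-0}"                   # the trace ends up with anon=0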
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105222624 kB' 'MemAvailable: 108548084 kB' 'Buffers: 2704 kB' 'Cached: 14641612 kB' 'SwapCached: 0 kB' 'Active: 11611604 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132380 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489144 kB' 'Mapped: 222192 kB' 'Shmem: 10646548 kB' 'KReclaimable: 304048 kB' 'Slab: 1124632 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820584 kB' 'KernelStack: 27136 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12649612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.955 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.956 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.957 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105223172 kB' 'MemAvailable: 108548632 kB' 'Buffers: 2704 kB' 'Cached: 14641616 kB' 'SwapCached: 0 kB' 'Active: 11610852 kB' 'Inactive: 3518544 kB' 'Active(anon): 11131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488340 kB' 'Mapped: 222176 kB' 'Shmem: 10646552 kB' 'KReclaimable: 304048 kB' 'Slab: 1124620 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820572 kB' 'KernelStack: 27136 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12650868 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.958 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.959 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105222704 kB' 'MemAvailable: 108548164 kB' 'Buffers: 2704 kB' 'Cached: 14641616 kB' 'SwapCached: 0 kB' 'Active: 11611100 kB' 'Inactive: 3518544 kB' 'Active(anon): 11131876 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488548 kB' 'Mapped: 222116 kB' 'Shmem: 10646552 kB' 'KReclaimable: 304048 kB' 'Slab: 1124656 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820608 kB' 'KernelStack: 27232 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12650888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.960 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
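The pass above is setup/common.sh's get_meminfo helper walking the /proc/meminfo snapshot it just printed: each "Key: value [kB]" line is split on IFS=': ' into var/val, and every key is skipped with continue until the requested one (HugePages_Rsvd on this pass) is reached, whose value is then echoed back to hugepages.sh. A condensed, standalone sketch of that pattern follows; the function name get_meminfo_value and the plain while-read form are illustrative, not the exact helper from setup/common.sh.

get_meminfo_value() {
    # Look up one key in a meminfo-style file and print its numeric value.
    # (Per-node meminfo files add a "Node N " prefix; the real helper strips it
    # first via the mapfile / "${mem[@]#Node +([0-9]) }" step seen in the trace.)
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key, as traced above
        echo "$val"
        return 0
    done < "$mem_f"
    return 1                               # requested key not present
}

On this host it would print 0 for HugePages_Rsvd, matching the resv=0 the script records a few entries later.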
00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 
19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.961 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.962 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.963 nr_hugepages=1024 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.963 resv_hugepages=0 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.963 surplus_hugepages=0 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.963 anon_hugepages=0 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.963 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105224760 kB' 'MemAvailable: 108550220 kB' 'Buffers: 2704 kB' 'Cached: 14641656 kB' 'SwapCached: 0 kB' 'Active: 11610788 kB' 'Inactive: 3518544 kB' 'Active(anon): 11131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488176 kB' 'Mapped: 222116 kB' 'Shmem: 10646592 kB' 'KReclaimable: 304048 kB' 'Slab: 1124656 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820608 kB' 'KernelStack: 27232 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12649304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.963 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
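At this point hugepages.sh has collected surp=0 and resv=0, echoed the expected figures (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and is re-reading HugePages_Total to confirm the kernel really allocated the 1024 x 2048 kB pages this even_2G_alloc case requested (1024 * 2048 kB = 2097152 kB, the Hugetlb figure in the snapshot above). A minimal sketch of that bookkeeping, assuming the lookup helper sketched earlier and the same variable names as the trace:

nr_hugepages=1024                            # expectation specific to this test case
surp=$(get_meminfo_value HugePages_Surp)     # 0 in the snapshots above
resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in the snapshots above
total=$(get_meminfo_value HugePages_Total)   # 1024 in the snapshots above
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count: $total"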
00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.964 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.965 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59222176 kB' 'MemUsed: 6436832 kB' 'SwapCached: 0 kB' 'Active: 2694976 kB' 'Inactive: 225208 kB' 'Active(anon): 2455552 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 225208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739344 kB' 'Mapped: 88708 kB' 'AnonPages: 183916 kB' 'Shmem: 2274712 kB' 'KernelStack: 14520 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124384 kB' 'Slab: 591616 kB' 'SReclaimable: 124384 kB' 'SUnreclaim: 467232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.966 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.966 
19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.966 [... the same setup/common.sh@31/@32 read/compare/continue cycle repeats over the remaining node0 fields (Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages), none of which match HugePages_Surp ...] 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.967 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
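The lookup this trace keeps single-stepping through can be summarized by a short standalone bash sketch. The names below (get_meminfo_sketch in particular) are illustrative rather than SPDK's own helpers, but the pattern is the one visible above: prefer the per-node meminfo file when a node number is given, strip the leading "Node N " prefix, then split each "key: value" line on IFS=': ' until the requested key matches and echo its value.

    #!/usr/bin/env bash
    # Minimal sketch (illustrative names, not SPDK's helpers) of the meminfo
    # lookup the xtrace above is stepping through.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # When a node is given, read the per-node copy instead, as in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. HugePages_Surp -> 0
                return 0
            fi
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0    # prints node0's surplus count (0 above)

Called as get_meminfo_sketch HugePages_Surp 0, the sketch prints the same 0 that the trace echoes at setup/common.sh@33.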
00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45998252 kB' 'MemUsed: 14681584 kB' 'SwapCached: 0 kB' 'Active: 8917720 kB' 'Inactive: 3293336 kB' 'Active(anon): 8677920 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3293336 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11905032 kB' 'Mapped: 133408 kB' 'AnonPages: 306156 kB' 'Shmem: 8371896 kB' 'KernelStack: 12728 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 179664 kB' 'Slab: 533040 kB' 'SReclaimable: 179664 kB' 'SUnreclaim: 353376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.968 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.968 19:08:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.968 [... the same setup/common.sh@31/@32 read/compare/continue cycle repeats over the remaining node1 fields (Inactive(anon) through FileHugePages, the same list as for node0), none of which match HugePages_Surp ...] 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:08.969 node0=512 expecting 512 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:08.969 node1=512 expecting 512 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:08.969 00:04:08.969 real 0m3.797s 00:04:08.969 user 0m1.512s 00:04:08.969 sys 0m2.280s 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.969 19:08:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.969 ************************************ 00:04:08.969 END TEST even_2G_alloc 00:04:08.969 
************************************ 00:04:09.230 19:08:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:09.230 19:08:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:09.230 19:08:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.230 19:08:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.230 19:08:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.230 ************************************ 00:04:09.230 START TEST odd_alloc 00:04:09.230 ************************************ 00:04:09.230 19:08:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:09.230 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:09.230 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.231 19:08:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
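The sizing that the odd_alloc preamble above steps through is easier to see as arithmetic: 2098176 kB against the 2048 kB default page size yields the deliberately odd count of 1025 pages (HUGEMEM=2049), and the per-node helper splits that across the two nodes as 512 on node1 plus the 513-page remainder on node0. The sketch below only reproduces those traced values; the exact rounding inside setup/hugepages.sh is not shown in the log, so the round-up division here is an assumption.

    # Sketch of the odd_alloc sizing traced above; values come from the log,
    # the rounding is an assumption (setup/hugepages.sh internals are not shown).
    default_hugepages=2048                      # kB, matches "Hugepagesize: 2048 kB"
    size_kb=2098176                             # argument to get_test_nr_hugepages
    # 2098176 / 2048 = 1024.5 and the trace ends up with 1025 pages, i.e. a round-up:
    nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))   # 1025
    hugemem_mb=$(( size_kb / 1024 ))            # 2049, exported as HUGEMEM
    echo "HUGEMEM=${hugemem_mb} -> ${nr_hugepages} x ${default_hugepages} kB pages"

    # Near-even split over the two nodes, last node first, matching the traced
    # nodes_test[1]=512 and nodes_test[0]=513:
    no_nodes=2
    declare -a nodes_test
    remaining=$nr_hugepages
    for (( n = no_nodes - 1; n > 0; n-- )); do
        nodes_test[n]=$(( nr_hugepages / no_nodes ))   # 512
        (( remaining -= nodes_test[n] ))
    done
    nodes_test[0]=$remaining                           # 513
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"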
00:04:12.586 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:12.586 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:12.586 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105223732 kB' 'MemAvailable: 108549192 kB' 'Buffers: 2704 kB' 'Cached: 14641788 kB' 'SwapCached: 0 kB' 'Active: 11612344 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133120 kB' 'Inactive(anon): 0 kB' 
'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489520 kB' 'Mapped: 222204 kB' 'Shmem: 10646724 kB' 'KReclaimable: 304048 kB' 'Slab: 1124976 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820928 kB' 'KernelStack: 27296 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12651104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.853 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.854 19:08:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.854 [... the same setup/common.sh@31/@32 read/compare/continue cycle repeats for Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce and WritebackTmp, none of which match AnonHugePages ...] 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.854
19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.854 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.855 19:08:31 
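The block above is the tail of get_meminfo AnonHugePages from setup/common.sh: /proc/meminfo is loaded into an array, each 'Key: value' line is split with IFS=': ' and read -r var val _, and every key that is not the requested one falls through to continue until AnonHugePages matches and its value (0) is echoed back into hugepages.sh as anon=0. A minimal, self-contained sketch of that lookup pattern follows; the helper name lookup_meminfo and the direct while-read loop are illustrative, not the exact code in common.sh.

#!/usr/bin/env bash
# Sketch: fetch one field from /proc/meminfo the way the trace above does it,
# splitting each "Key:   value kB" line on ': ' and echoing the first match.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done </proc/meminfo
    return 1
}

# Example (matches the run above): anon=$(lookup_meminfo AnonHugePages)  ->  0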
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.855 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105224020 kB' 'MemAvailable: 108549480 kB' 'Buffers: 2704 kB' 'Cached: 14641792 kB' 'SwapCached: 0 kB' 'Active: 11612120 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132896 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489396 kB' 'Mapped: 222136 kB' 'Shmem: 10646728 kB' 'KReclaimable: 304048 kB' 'Slab: 1124960 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820912 kB' 'KernelStack: 27200 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12651716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB'
[... 00:04:12.855-00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc: get_meminfo HugePages_Surp scans the dumped keys one by one (MemTotal through HugePages_Rsvd), none matching HugePages_Surp, each iteration running setup/common.sh@32 continue, @31 IFS=': ' and @31 read -r var val _ ...]
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.857 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105225464 kB' 'MemAvailable: 108550924 kB' 'Buffers: 2704 kB' 'Cached: 14641792 kB' 'SwapCached: 0 kB' 'Active: 11612516 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133292 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489816 kB' 'Mapped: 222144 kB' 'Shmem: 10646728 kB' 'KReclaimable: 304048 kB' 'Slab: 1125020 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820972 kB' 'KernelStack: 27184 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12651736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB'
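Both meminfo snapshots above were produced with an empty node argument, so the per-node path /sys/devices/system/node/node<N>/meminfo is not used and the whole-system /proc/meminfo is dumped; when a NUMA node is passed in, the per-node file is read instead and the leading 'Node <N> ' prefix is stripped from every line (the mem=("${mem[@]#Node +([0-9]) }") step in the trace). A rough sketch of that source selection, under the assumption that the branching mirrors what the trace shows (the function name pick_meminfo_source is illustrative, not common.sh itself):

#!/usr/bin/env bash
# Sketch: decide whether to read system-wide or per-NUMA-node meminfo.
pick_meminfo_source() {
    local node=$1 mem_f=/proc/meminfo
    # The per-node file only exists when a real node id is appended to the path.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

# pick_meminfo_source      ->  /proc/meminfo            (what this run used, node='')
# pick_meminfo_source 0    ->  node0/meminfo if present (lines then carry a 'Node 0 ' prefix)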
[... 00:04:12.857-00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc: get_meminfo HugePages_Rsvd scans the dumped keys one by one (MemTotal through HugePages_Free), none matching HugePages_Rsvd, each iteration running setup/common.sh@32 continue, @31 IFS=': ' and @31 read -r var val _ ...]
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105224408 kB' 'MemAvailable: 108549868 kB' 'Buffers: 2704 kB' 'Cached: 14641844 kB' 'SwapCached: 0 kB' 'Active: 11612776 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133552 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490036 kB' 'Mapped: 222136 kB' 'Shmem: 10646780 kB' 'KReclaimable: 304048 kB' 'Slab: 1124644 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820596 kB' 'KernelStack: 27248 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12652124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB'
[... 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc: the get_meminfo HugePages_Total scan begins; MemTotal and MemFree are skipped ...]
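With resv, surp and anon all 0 and 1025 hugepages requested, the checks above assert that the kernel's view matches the deliberately odd request once surplus and reserved pages are accounted for, i.e. 1025 == nr_hugepages + surp + resv and 1025 == nr_hugepages, before HugePages_Total is read back. A condensed, self-contained sketch of that verification (verify_odd_alloc and the awk lookup are illustrative; the script's own arithmetic may be arranged differently):

#!/usr/bin/env bash
# Sketch: re-run the odd_alloc consistency check against the live /proc/meminfo.
verify_odd_alloc() {
    local nr_hugepages=1025 resv=0 surp=0 anon=0   # values echoed by the trace above
    local total
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    total=${total:-0}
    # Kernel total must cover the odd request once surplus and reserved pages are counted,
    # and with both at 0 it must equal the request exactly. anon is reported but not summed.
    (( total == nr_hugepages + surp + resv )) || return 1
    (( total == nr_hugepages )) || return 1
    echo "HugePages_Total=$total matches nr_hugepages=$nr_hugepages"
}

verify_odd_alloc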
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.859 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 
19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
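[annotation] The trace above is the key-scan loop in setup/common.sh's get_meminfo helper: it reads the memory report one "Key: value" line at a time with IFS=': ' and hits continue for every key that is not the one requested (HugePages_Total here), which is why each field of the report shows up as its own test/continue pair. A minimal standalone sketch of that pattern follows; it is an illustration under the assumption that only a key name and an optional file path are needed, not the actual SPDK helper.

# Minimal sketch of the key-scan pattern traced above (not the real helper):
# walk "Key: value" pairs and print the value for the one requested key.
get_meminfo_value() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # every other key is skipped, as in the trace
        echo "$val"
        return 0
    done < "$file"
    return 1
}

get_meminfo_value HugePages_Total   # prints 1025 on this machine, per the trace below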
00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.860 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59242452 kB' 'MemUsed: 6416556 kB' 'SwapCached: 0 kB' 'Active: 2695996 kB' 'Inactive: 225208 kB' 'Active(anon): 2456572 kB' 
'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 225208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739472 kB' 'Mapped: 88728 kB' 'AnonPages: 184864 kB' 'Shmem: 2274840 kB' 'KernelStack: 14616 kB' 'PageTables: 4968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124384 kB' 'Slab: 591596 kB' 'SReclaimable: 124384 kB' 'SUnreclaim: 467212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
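[annotation] Once the system-wide HugePages_Total (1025) passes the hugepages.sh@110 check against nr_hugepages + surplus + reserved, the same helper is re-run per NUMA node: with a node argument (HugePages_Surp 0 above) it switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, loads it with mapfile, and strips the leading "Node 0 " that every per-node line carries before running the same key scan. A short sketch of that per-node variant, assuming extglob for the +([0-9]) pattern; this is a reconstruction, not the script itself.

# Sketch of the per-node read traced above: load node0's meminfo, strip the
# "Node 0 " prefix (extglob is needed for the +([0-9]) pattern), then scan
# for HugePages_Surp exactly as in the system-wide case.
shopt -s extglob
node=0
mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
while IFS=': ' read -r var val _; do
    [[ $var == HugePages_Surp ]] && { echo "$val"; break; }
done < <(printf '%s\n' "${mem[@]}")   # prints 0 for node0 in this run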
00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.861 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45981300 kB' 'MemUsed: 14698536 kB' 'SwapCached: 0 kB' 'Active: 8916844 kB' 'Inactive: 3293336 kB' 'Active(anon): 8677044 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3293336 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11905100 kB' 'Mapped: 133408 kB' 'AnonPages: 305176 kB' 'Shmem: 8371964 kB' 'KernelStack: 12552 kB' 'PageTables: 3392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 179664 kB' 'Slab: 533116 kB' 'SReclaimable: 179664 kB' 'SUnreclaim: 353452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
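[annotation] The two per-node dumps (node0 in the earlier printf, node1 in the block just above) report HugePages_Total: 512 and 513, HugePages_Free equal to those totals, and HugePages_Surp: 0 on both nodes, i.e. the odd 1025-page total split 512+513 across the two NUMA nodes, with no surplus left to add at hugepages.sh@117. The same counters can be pulled straight from sysfs; the paths are the ones in the trace, while the awk extraction below is only an illustration, not what the script does.

# Read the per-node hugepage counters the trace just dumped. Per-node meminfo
# lines look like "Node 1 HugePages_Total: 513", hence fields $3 and $4.
for node in 0 1; do
    f=/sys/devices/system/node/node$node/meminfo
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$f")
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$f")
    echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
done
# In this run: node0 512/0 and node1 513/0, i.e. 512 + 513 = 1025 pages.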
00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.862 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.863 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
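[annotation] What follows below is the end of the odd_alloc verification: hugepages.sh@126-128 records the counts from nodes_test and nodes_sys as array indices in sorted_t and sorted_s and echoes the pairs (node0 holds 512 where 513 was expected, node1 holds 513 where 512 was expected), then @130 compares only the sorted sets, [[ 512 513 == \5\1\2\ \5\1\3 ]], so the test passes even though the kernel placed the extra page on the other node. A sketch of that set comparison, with illustrative array names standing in for nodes_test/nodes_sys:

# Sketch of the comparison at hugepages.sh@126-130 below. Using each count as
# a plain indexed-array subscript means "${!sorted_t[*]}" lists the keys in
# ascending order, so only the multiset of counts has to match, not the
# node-to-count mapping.
sorted_t=()
sorted_s=()
per_node_actual=(512 513)     # node0, node1 as reported in this run
per_node_expected=(513 512)   # the split the test asked for, per the echoes below
for node in 0 1; do
    sorted_t[per_node_actual[node]]=1
    sorted_s[per_node_expected[node]]=1
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd allocation verified'   # "512 513" == "512 513"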
00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:12.864 node0=512 expecting 513 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:12.864 node1=513 expecting 512 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:12.864 00:04:12.864 real 0m3.821s 00:04:12.864 user 0m1.569s 00:04:12.864 sys 0m2.308s 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.864 19:08:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.864 ************************************ 00:04:12.864 END TEST odd_alloc 00:04:12.864 ************************************ 00:04:13.125 19:08:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.125 19:08:31 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:13.125 19:08:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.125 19:08:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.125 19:08:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.125 ************************************ 00:04:13.125 START TEST custom_alloc 00:04:13.125 ************************************ 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:13.125 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.126 19:08:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.431 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
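[annotation] The custom_alloc preamble above converts its two size requests into per-node page counts (512 and 1024), joins them with the IFS=',' set at hugepages.sh@167 into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', and hands that to scripts/setup.sh. A sketch of that construction follows; treating the get_test_nr_hugepages arguments as sizes in kB is an assumption that happens to reproduce the 512 and 1024 counts given the 2048 kB default hugepage size reported further down in this log.

# Sketch of the HUGENODE construction traced above (hugepages.sh@181-187).
# Assumption: the size arguments are kB, so dividing by the 2048 kB default
# hugepage size yields the 512 and 1024 pages seen in the trace.
IFS=,
default_hugepage_kb=2048
nodes_hp=()
nodes_hp[0]=$((1048576 / default_hugepage_kb))   # first request  -> 512 pages
nodes_hp[1]=$((2097152 / default_hugepage_kb))   # second request -> 1024 pages

HUGENODE=()
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
done
echo "HUGENODE=${HUGENODE[*]}"    # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$nr_hugepages" # 1536, matching the value recorded just below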
00:04:16.431 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:16.431 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104192908 kB' 'MemAvailable: 107518368 kB' 'Buffers: 2704 kB' 'Cached: 14641984 kB' 'SwapCached: 0 kB' 'Active: 11613972 kB' 'Inactive: 3518544 kB' 'Active(anon): 11134748 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490664 kB' 'Mapped: 222260 kB' 'Shmem: 10646920 kB' 'KReclaimable: 304048 kB' 'Slab: 1125208 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821160 kB' 'KernelStack: 27168 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.431 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
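The long stretch of "continue" entries around this point is the get_meminfo helper in setup/common.sh walking /proc/meminfo one key at a time until it reaches the field it was asked for (AnonHugePages for this call), then echoing that field's value. Below is a minimal, self-contained sketch of the same lookup pattern; get_meminfo_value is an illustrative name rather than the actual SPDK helper, and the final line only shows how it would be invoked on a machine like the one in this run.

#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup pattern visible in the trace: read the
# file into an array, strip any "Node <n> " prefix (per-node files prepend it),
# split each line on ': ' and print the value for the requested key.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a NUMA node is given, read that node's meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_value HugePages_Total   # would print 1536 for the snapshot above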
00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.432 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104192908 kB' 'MemAvailable: 107518368 kB' 'Buffers: 2704 kB' 'Cached: 14641984 kB' 'SwapCached: 0 kB' 'Active: 11613924 kB' 'Inactive: 3518544 kB' 'Active(anon): 11134700 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490644 kB' 'Mapped: 222232 kB' 'Shmem: 10646920 kB' 'KReclaimable: 304048 kB' 'Slab: 1125160 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821112 kB' 'KernelStack: 27168 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 
19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.433 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
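A few entries before the first lookup (setup/hugepages.sh@96) the script ran [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]. That line is just bash xtrace output: the left-hand side is the contents of /sys/kernel/mm/transparent_hugepage/enabled, where the kernel marks the active policy with brackets, and the right-hand side is the pattern *"[never]"* with each character escaped by the tracer. The test appears to gate whether anonymous huge pages (AnonHugePages) are worth reading at all. A small stand-alone sketch of that check, with an illustrative function name:

#!/usr/bin/env bash
# Returns success when transparent huge pages are not fully disabled, i.e.
# when the active policy in the THP "enabled" file is not "[never]".
thp_in_use() {
    local thp=/sys/kernel/mm/transparent_hugepage/enabled
    [[ -r $thp ]] || return 1
    # File contents look like: "always [madvise] never"
    [[ $(<"$thp") != *"[never]"* ]]
}

if thp_in_use; then
    echo "THP policy is not [never]; AnonHugePages in /proc/meminfo may be non-zero"
else
    echo "THP disabled; anonymous huge page usage can be treated as 0"
fi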
00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.434 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.703 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104193340 kB' 'MemAvailable: 107518800 kB' 'Buffers: 2704 kB' 'Cached: 14641996 kB' 'SwapCached: 0 kB' 'Active: 11612780 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133556 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489928 kB' 'Mapped: 222156 kB' 'Shmem: 10646932 kB' 'KReclaimable: 304048 kB' 'Slab: 1125156 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821108 kB' 'KernelStack: 27152 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.704 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
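A little further down, once the HugePages_Rsvd scan finishes, the trace echoes nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then evaluates (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )) before another HugePages_Total lookup. The sketch below mirrors the shape of that accounting; meminfo and verify_nr_hugepages_sketch are illustrative names, not the SPDK functions, and the exact operand order inside the real checks is not recoverable from the expanded trace.

#!/usr/bin/env bash
# Roughly the accounting verify_nr_hugepages performs in the trace: the number
# of huge pages reported by /proc/meminfo has to line up with the requested
# count once surplus and reserved pages are taken into account.
meminfo() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

verify_nr_hugepages_sketch() {
    local expected=$1
    local total surp resv anon
    total=$(meminfo HugePages_Total)
    surp=$(meminfo HugePages_Surp)
    resv=$(meminfo HugePages_Rsvd)
    anon=$(meminfo AnonHugePages)

    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # Same shape as the checks at hugepages.sh@107 and @109 in the trace.
    (( total == expected + surp + resv )) || return 1
    (( total == expected ))
}

verify_nr_hugepages_sketch 1536 || echo "hugepage accounting mismatch" >&2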
00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.705 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:16.706 nr_hugepages=1536 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.706 resv_hugepages=0 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.706 surplus_hugepages=0 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.706 anon_hugepages=0 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104192956 kB' 'MemAvailable: 107518416 kB' 'Buffers: 2704 kB' 'Cached: 14642044 kB' 'SwapCached: 0 kB' 'Active: 11612796 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133572 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489896 kB' 'Mapped: 222156 kB' 'Shmem: 10646980 kB' 'KReclaimable: 304048 kB' 'Slab: 1125156 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821108 kB' 'KernelStack: 27136 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
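Note: the repeated xtrace entries in this section all come from one helper loop, get_meminfo in setup/common.sh. As the trace shows, it reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node number is given), strips the per-node "Node <N> " prefix, splits each line with IFS=': ' and `read -r var val _`, and keeps issuing `continue` until the requested key (HugePages_Rsvd, HugePages_Total, and per-node HugePages_Surp in this run) matches, then echoes its value. A minimal standalone sketch of that pattern is shown below; the function name is illustrative rather than the actual SPDK helper, and the values in the comments are the ones observed in this particular run.

#!/usr/bin/env bash
# Sketch of the meminfo key-scan pattern being traced above (assumption: bash
# with extglob; this is not the SPDK function itself, just the same idea).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}          # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node meminfo files exist only on NUMA systems; otherwise stay global.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node <N> " prefix of per-node files

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"               # e.g. 1536 for HugePages_Total in this run
            return 0
        fi
    done
    return 1
}

# Usage mirroring the checks traced in this log:
get_meminfo_sketch HugePages_Rsvd     # -> 0 here
get_meminfo_sketch HugePages_Total    # -> 1536 (2048 kB pages) here
get_meminfo_sketch HugePages_Surp 0   # -> 0 on NUMA node 0 here

The custom_alloc test that produced this trace then compares the echoed totals against what it allocated: 1536 pages system-wide, split as 512 on node0 and 1024 on node1, which the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines further down confirm.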
00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.706 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.707 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59238024 kB' 'MemUsed: 6420984 kB' 'SwapCached: 0 kB' 'Active: 2696092 kB' 'Inactive: 225208 kB' 'Active(anon): 2456668 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 225208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739636 kB' 'Mapped: 88744 kB' 'AnonPages: 184872 kB' 'Shmem: 2275004 kB' 'KernelStack: 14632 kB' 'PageTables: 4960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124384 kB' 'Slab: 591948 kB' 'SReclaimable: 124384 kB' 'SUnreclaim: 467564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.708 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 44954668 kB' 'MemUsed: 15725168 kB' 'SwapCached: 0 kB' 'Active: 8916728 kB' 'Inactive: 3293336 kB' 'Active(anon): 8676928 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3293336 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11905132 kB' 'Mapped: 133412 kB' 'AnonPages: 305024 kB' 'Shmem: 8371996 kB' 'KernelStack: 12504 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 179664 kB' 'Slab: 533208 kB' 'SReclaimable: 179664 kB' 'SUnreclaim: 353544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.709 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.710 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:16.711 node0=512 expecting 512 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:16.711 node1=1024 expecting 1024 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:16.711 00:04:16.711 real 0m3.650s 00:04:16.711 user 0m1.424s 00:04:16.711 sys 0m2.263s 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.711 19:08:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.711 ************************************ 00:04:16.711 END TEST custom_alloc 00:04:16.711 ************************************ 00:04:16.711 19:08:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:16.711 19:08:35 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:16.711 19:08:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.711 19:08:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.711 19:08:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.711 ************************************ 00:04:16.711 START TEST no_shrink_alloc 00:04:16.711 ************************************ 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.711 19:08:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:20.017 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:20.017 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.017 
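[editor note] The trace up to this point sets up the no_shrink_alloc case: hugepages.sh@49-@73 turn the 2097152 kB request with an explicit node list ('0') into nr_hugepages=1024 default-sized (2048 kB) pages assigned entirely to node 0, and verify_nr_hugepages (@96) gates the AnonHugePages read on the transparent-hugepage policy not being [never] (here it is [madvise]). A minimal standalone sketch of that sizing logic follows; the variable names are illustrative, not the script's own internals.

# Sketch only: reproduces the arithmetic visible in the trace above, not the SPDK script itself.
default_hugepagesize_kb=2048              # Hugepagesize reported in /proc/meminfo
request_kb=2097152                        # size argument seen at hugepages.sh@49
node_ids=(0)                              # node list seen at hugepages.sh@52

nr_hugepages=$((request_kb / default_hugepagesize_kb))   # 2097152 / 2048 = 1024
declare -A nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages       # all 1024 pages pinned to node 0
done

# As in the trace, the AnonHugePages read is gated on the THP policy:
thp_policy=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_policy != *'[never]'* ]]; then
    echo "THP not disabled ($thp_policy); AnonHugePages will be read from /proc/meminfo"
fi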
19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105250568 kB' 'MemAvailable: 108576028 kB' 'Buffers: 2704 kB' 'Cached: 14642148 kB' 'SwapCached: 0 kB' 'Active: 11614572 kB' 'Inactive: 3518544 kB' 'Active(anon): 11135348 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491068 kB' 'Mapped: 222268 kB' 'Shmem: 10647084 kB' 'KReclaimable: 304048 kB' 'Slab: 1125328 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821280 kB' 'KernelStack: 27136 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12650624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.017 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.018 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105250848 kB' 'MemAvailable: 108576308 kB' 'Buffers: 2704 kB' 'Cached: 14642164 kB' 'SwapCached: 0 kB' 'Active: 11614044 kB' 
'Inactive: 3518544 kB' 'Active(anon): 11134820 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490992 kB' 'Mapped: 222192 kB' 'Shmem: 10647100 kB' 'KReclaimable: 304048 kB' 'Slab: 1125320 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821272 kB' 'KernelStack: 27152 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12651008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 
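[editor note] Every get_meminfo call traced in this section follows the same pattern from setup/common.sh: slurp /proc/meminfo (or a node's own meminfo file) with mapfile, strip any leading "Node N " prefix, then walk the lines with IFS=': ' until the requested key matches and echo its value (0 for HugePages_Surp on this host). Below is a self-contained approximation of that pattern; the function layout is a sketch modeled on the traced statements, not a verbatim copy of the script.

# Approximation of the /proc/meminfo scan seen at setup/common.sh@16-@33; sketch only.
shopt -s extglob                                 # needed for the +([0-9]) prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")             # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                          # e.g. "0" for HugePages_Surp on this host
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp                       # prints 0 here, matching the trace's "echo 0"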
19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.019 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.020 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.020 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.020 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.020 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.020 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.285 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105251308 kB' 'MemAvailable: 108576768 kB' 'Buffers: 2704 kB' 'Cached: 14642180 kB' 'SwapCached: 0 kB' 'Active: 11614084 kB' 'Inactive: 3518544 kB' 'Active(anon): 11134860 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 490992 kB' 'Mapped: 222192 kB' 'Shmem: 10647116 kB' 'KReclaimable: 304048 kB' 'Slab: 1125320 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821272 kB' 'KernelStack: 27152 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12651032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.286 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 
19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.287 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.288 nr_hugepages=1024 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.288 resv_hugepages=0 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.288 surplus_hugepages=0 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.288 anon_hugepages=0 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105250552 kB' 'MemAvailable: 108576012 kB' 'Buffers: 2704 kB' 'Cached: 14642200 kB' 'SwapCached: 0 kB' 'Active: 11614064 kB' 'Inactive: 3518544 kB' 'Active(anon): 11134840 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490992 kB' 'Mapped: 222192 kB' 'Shmem: 10647136 kB' 'KReclaimable: 304048 kB' 'Slab: 1125320 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 821272 kB' 'KernelStack: 27152 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12651056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.288 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.289 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 
19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.290 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58183968 kB' 'MemUsed: 7475040 kB' 'SwapCached: 0 kB' 'Active: 2698292 kB' 'Inactive: 225208 kB' 'Active(anon): 2458868 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 225208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739712 kB' 'Mapped: 88776 kB' 'AnonPages: 186976 kB' 'Shmem: 2275080 kB' 'KernelStack: 14632 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
124384 kB' 'Slab: 592180 kB' 'SReclaimable: 124384 kB' 'SUnreclaim: 467796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 
19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.291 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.292 19:08:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:20.292 node0=1024 expecting 1024 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.292 19:08:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.599 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:65:00.0 
(144d a80a): Already using the vfio-pci driver 00:04:23.599 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:23.599 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105268496 kB' 'MemAvailable: 108593956 kB' 'Buffers: 2704 kB' 'Cached: 14642296 kB' 'SwapCached: 0 kB' 'Active: 11619768 kB' 'Inactive: 3518544 kB' 'Active(anon): 11140544 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496084 kB' 'Mapped: 222828 kB' 'Shmem: 10647232 kB' 'KReclaimable: 304048 kB' 'Slab: 1125024 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820976 kB' 'KernelStack: 27072 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70509448 kB' 'Committed_AS: 12655140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.599 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.600 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 
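The loop traced above walks /proc/meminfo one "key: value" pair at a time with IFS=': ' and read -r, hitting "continue" for every key that is not the one requested and echoing 0 at the end when no match accumulated (here yielding anon=0). A minimal stand-alone sketch of that pattern follows; the helper name meminfo_value is hypothetical and this is not the actual setup/common.sh get_meminfo, only an illustration of the same field-extraction idea under the assumption that the per-node sysfs meminfo file exists when a node argument is given.

#!/usr/bin/env bash
# Sketch only: print a single /proc/meminfo field, optionally for one NUMA node.
meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # Per-node counters live in sysfs and prefix every line with "Node <n> ".
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node * }                 # strip the per-node prefix, if present
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"                 # value only; the kB unit is dropped
            return 0
        fi
    done <"$mem_f"
    echo 0                                   # key absent: report 0, like the trace's 'echo 0'
}

# Example (values are host-specific; 1024 matches the HugePages_Total printed above):
#   meminfo_value HugePages_Total        ->  1024
#   meminfo_value HugePages_Surp 0       ->  0    (node0)

The design choice mirrored here is that missing or zero-valued keys both report 0, which is why the traced verify_nr_hugepages logic can add the result into nodes_test[node] unconditionally before comparing against the expected per-node count.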
00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105264324 kB' 'MemAvailable: 108589784 kB' 'Buffers: 2704 kB' 'Cached: 14642300 kB' 'SwapCached: 0 kB' 'Active: 11621640 kB' 'Inactive: 3518544 kB' 'Active(anon): 11142416 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498500 kB' 'Mapped: 223108 kB' 'Shmem: 10647236 kB' 'KReclaimable: 304048 kB' 'Slab: 1125008 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820960 kB' 'KernelStack: 27104 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12657548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235336 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.601 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.602 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105263864 kB' 'MemAvailable: 108589324 kB' 'Buffers: 2704 kB' 'Cached: 14642320 kB' 'SwapCached: 0 kB' 'Active: 11615876 kB' 'Inactive: 3518544 kB' 'Active(anon): 11136652 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492732 kB' 'Mapped: 222204 kB' 'Shmem: 10647256 kB' 'KReclaimable: 304048 kB' 'Slab: 1125008 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820960 kB' 'KernelStack: 27072 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12651584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.603 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.604 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.868 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.869 nr_hugepages=1024 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.869 resv_hugepages=0 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.869 surplus_hugepages=0 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.869 anon_hugepages=0 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105264292 kB' 'MemAvailable: 108589752 kB' 'Buffers: 2704 kB' 'Cached: 14642320 kB' 'SwapCached: 0 kB' 'Active: 11615580 kB' 'Inactive: 3518544 kB' 'Active(anon): 11136356 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492488 kB' 'Mapped: 222204 kB' 'Shmem: 10647256 kB' 'KReclaimable: 304048 kB' 'Slab: 1125008 kB' 'SReclaimable: 304048 kB' 'SUnreclaim: 820960 kB' 'KernelStack: 27072 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12651608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3964276 kB' 'DirectMap2M: 30318592 kB' 'DirectMap1G: 101711872 kB' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.869 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
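The long runs of "[[ <field> == ... ]]" followed by "continue" in this stretch are the get_meminfo helper from setup/common.sh walking a meminfo dump one field at a time: each line is split on ': ' into a key and a value, every key that is not the requested one is skipped, and the matching value is echoed back to hugepages.sh (0 for HugePages_Rsvd above, 1024 for HugePages_Total a little further on). A minimal sketch of that pattern, with an illustrative function name and simplified node handling rather than the real setup/common.sh code, would be:

    get_meminfo_sketch() {
        # Usage: get_meminfo_sketch <Field> [node]  -- assumed interface, not the real helper.
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # When a node is given and a node-local meminfo exists, read that file instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Node-local files prefix each line with "Node <N> "; strip that first,
        # then split on ': ' exactly as the traced read loop does.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as "get_meminfo_sketch HugePages_Total" against the dump above, this would print 1024, and "get_meminfo_sketch HugePages_Surp 0" would read node0's file and print 0, matching the values the nr_hugepages accounting in hugepages.sh checks against.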
00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 
19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.870 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.871 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58202028 kB' 'MemUsed: 7456980 kB' 'SwapCached: 0 kB' 'Active: 2699604 kB' 'Inactive: 225208 kB' 'Active(anon): 2460180 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 225208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2739764 kB' 'Mapped: 88788 kB' 'AnonPages: 188280 kB' 'Shmem: 2275132 kB' 'KernelStack: 14616 kB' 'PageTables: 5016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124384 kB' 'Slab: 591836 kB' 'SReclaimable: 124384 kB' 'SUnreclaim: 467452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.871 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 
19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.872 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.873 19:08:42 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.873 node0=1024 expecting 1024 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.873 00:04:23.873 real 0m7.032s 00:04:23.873 user 0m2.684s 00:04:23.873 sys 0m4.366s 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.873 19:08:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.873 ************************************ 00:04:23.873 END TEST no_shrink_alloc 00:04:23.873 ************************************ 00:04:23.873 19:08:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:23.873 19:08:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:23.873 00:04:23.873 real 0m26.148s 00:04:23.873 user 0m10.163s 00:04:23.873 sys 0m16.121s 00:04:23.873 19:08:42 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.873 19:08:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.873 ************************************ 00:04:23.873 END TEST hugepages 00:04:23.873 ************************************ 00:04:23.873 19:08:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:23.873 19:08:42 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:23.873 19:08:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.873 19:08:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.873 19:08:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.873 ************************************ 00:04:23.873 START TEST driver 00:04:23.873 ************************************ 00:04:23.873 19:08:42 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:24.133 * Looking for test storage... 
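After "END TEST hugepages" the suite resets its reservations: clear_hp in setup/hugepages.sh loops over every node index in nodes_sys and every hugepage-size directory under /sys/devices/system/node/node<N>/hugepages/, echoes 0 into each (the xtrace shows the "echo 0" but not its redirection target), and exports CLEAR_HUGE=yes before the driver tests start. A minimal sketch of that cleanup, assuming the standard sysfs nr_hugepages file as the write target since the log does not show it, would be:

    clear_hp_sketch() {
        # Reset every per-node, per-size hugepage pool to zero.
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # nr_hugepages is assumed from the kernel sysfs layout;
                # the trace above only shows "echo 0".
                [[ -e $hp/nr_hugepages ]] && echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }

On the two-node machine in this run that is four writes (presumably the 2 MiB and 1 GiB pools on node0 and node1), which lines up with the four "echo 0" records above.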
00:04:24.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.133 19:08:42 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:24.133 19:08:42 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.133 19:08:42 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.426 19:08:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:29.426 19:08:47 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.426 19:08:47 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.426 19:08:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.426 ************************************ 00:04:29.426 START TEST guess_driver 00:04:29.426 ************************************ 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:29.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:29.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:29.426 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:29.426 19:08:47 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:29.426 Looking for driver=vfio-pci 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.426 19:08:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 
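
The guess_driver trace above reduces to three inputs: whether unsafe no-IOMMU mode is enabled, how many /sys/kernel/iommu_groups entries exist (314 on this host), and whether modprobe --show-depends vfio_pci resolves to real .ko files. A hedged sketch of that decision, not the script's exact code; the failure string is the one driver.sh@51 compares against:

    pick_driver() {
        local unsafe=N groups=(/sys/kernel/iommu_groups/*)
        [[ -e ${groups[0]} ]] || groups=()                  # handle an empty glob
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
            modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }
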
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.732 19:08:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.022 00:04:38.022 real 0m8.737s 00:04:38.022 user 0m2.892s 00:04:38.022 sys 0m5.054s 00:04:38.022 19:08:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.022 19:08:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.022 ************************************ 00:04:38.022 END TEST guess_driver 00:04:38.022 ************************************ 00:04:38.022 19:08:56 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:38.022 00:04:38.022 real 0m13.760s 00:04:38.022 user 0m4.363s 00:04:38.022 sys 0m7.816s 00:04:38.022 19:08:56 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.022 19:08:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.022 ************************************ 00:04:38.022 END TEST driver 00:04:38.022 ************************************ 00:04:38.022 19:08:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:38.022 19:08:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:38.022 19:08:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.022 19:08:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.022 19:08:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.022 ************************************ 00:04:38.022 START TEST devices 00:04:38.022 ************************************ 00:04:38.022 19:08:56 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:38.022 * Looking for test storage... 00:04:38.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:38.022 19:08:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.022 19:08:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.022 19:08:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.022 19:08:56 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:42.272 
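
get_zoned_devs above filters out any NVMe namespace whose queue reports a zoned model other than "none", because the partition and mount steps that follow assume a conventional block device. A minimal sketch of that sysfs check:

    is_block_zoned() {
        local device=$1                                      # e.g. nvme0n1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1  # no attribute: not zoned
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]  # "none" means conventional, per the trace
    }

    for nvme in /sys/block/nvme*; do
        is_block_zoned "${nvme##*/}" && echo "zoned: ${nvme##*/}"
    done
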
19:09:00 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:42.272 No valid GPT data, bailing 00:04:42.272 19:09:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:42.272 19:09:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:42.272 19:09:00 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:42.272 19:09:00 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.272 19:09:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.272 ************************************ 00:04:42.272 START TEST nvme_mount 00:04:42.272 ************************************ 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.272 19:09:00 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:43.214 Creating new GPT entries in memory. 00:04:43.214 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:43.214 other utilities. 00:04:43.214 19:09:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:43.214 19:09:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.214 19:09:01 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.214 19:09:01 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.214 19:09:01 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:44.155 Creating new GPT entries in memory. 00:04:44.155 The operation has completed successfully. 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2643321 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.155 19:09:02 
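
Condensed, the nvme_mount preparation traced above zaps the GPT, creates a single 1 GiB partition (sgdisk under flock on the disk, with sync_dev_uevents.sh waiting for the new partition node), formats it ext4 and mounts it at the test directory, where a dummy test_nvme file is created for the later verify step. A hedged re-creation of that sequence using the device and mount point visible in the log; run it only against a disposable disk:

    disk=/dev/nvme0n1
    part=${disk}p1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                           # destroy existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # sectors 2048-2099199 = 1 GiB partition
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$part"                              # quiet + force, matching the traced mkfs call
    mount "$part" "$mnt"
    : > "$mnt/test_nvme"                               # dummy file the verify step checks for
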
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.155 19:09:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.460 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.461 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.721 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.721 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.983 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:47.983 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:47.983 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.983 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.983 19:09:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.286 19:09:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.286 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.287 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.547 19:09:10 
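
The wall of [[ 0000:xx:xx.x == ... ]] comparisons above is the verify helper reading setup.sh config output (with PCI_ALLOWED restricted to 0000:65:00.0) one line at a time: the first field is a PCI address, the trailing field is its status, and the check passes once the allowed device reports the expected active mount. A simplified sketch of that loop; the status wording is copied from the log, the invocation details and $rootdir are assumptions:

    verify_active_mount() {
        local want_pci=$1 want_mount=$2    # e.g. 0000:65:00.0 nvme0n1:nvme0n1p1
        local pci _ status found=0
        while read -r pci _ _ status; do
            [[ $pci == "$want_pci" ]] || continue
            # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
            [[ $status == *"Active devices:"*"$want_mount"* ]] && found=1
        done < <(PCI_ALLOWED=$want_pci "$rootdir/scripts/setup.sh" config)
        (( found == 1 ))
    }
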
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.547 19:09:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.851 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.851 00:04:54.851 real 0m12.833s 00:04:54.851 user 0m3.758s 00:04:54.851 sys 0m6.807s 00:04:54.851 19:09:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.851 19:09:13 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:54.851 ************************************ 00:04:54.851 END TEST nvme_mount 00:04:54.851 ************************************ 00:04:54.851 19:09:13 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:54.851 19:09:13 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:54.851 19:09:13 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.851 19:09:13 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.851 19:09:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:54.851 ************************************ 00:04:54.851 START TEST dm_mount 00:04:54.851 ************************************ 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.851 19:09:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:56.237 Creating new GPT entries in memory. 00:04:56.237 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.237 other utilities. 00:04:56.237 19:09:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.237 19:09:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.237 19:09:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:56.237 19:09:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.237 19:09:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.182 Creating new GPT entries in memory. 00:04:57.182 The operation has completed successfully. 00:04:57.182 19:09:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.182 19:09:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.182 19:09:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.182 19:09:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.182 19:09:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:58.215 The operation has completed successfully. 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2648861 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.215 19:09:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.520 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:01.782 19:09:20 
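
dm_mount then layers a device-mapper target over the two partitions, waits (up to five tries) for /dev/mapper/nvme_dm_test to appear, resolves it to its dm-N node with readlink -f, and confirms both partitions list that node under holders/. The dmsetup table itself is fed on stdin and never shows up in the xtrace output, so the linear table below is an assumption; the surrounding steps mirror the trace (the retry delay is added here for illustration):

    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2 name=nvme_dm_test
    s1=$(blockdev --getsz "$p1") s2=$(blockdev --getsz "$p2")   # sizes in 512-byte sectors

    printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$s1" "$p1" "$s1" "$s2" "$p2" |
        dmsetup create "$name"                 # assumed table: concatenate p1 and p2 linearly

    for t in {1..5}; do                        # bounded wait for the mapper node, as in the trace
        [[ -e /dev/mapper/$name ]] && break
        sleep 1
    done
    dm=$(readlink -f "/dev/mapper/$name")      # -> /dev/dm-0 on this host
    dm=${dm##*/}
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # both partitions must name the dm node
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]
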
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.782 19:09:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.086 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:05.087 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:05.087 00:05:05.087 real 0m10.241s 00:05:05.087 user 0m2.582s 00:05:05.087 sys 0m4.716s 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.087 19:09:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:05.087 ************************************ 00:05:05.087 END TEST dm_mount 00:05:05.087 ************************************ 00:05:05.087 19:09:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:05.087 19:09:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:05.087 19:09:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:05.087 19:09:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.087 19:09:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.087 19:09:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:05.087 19:09:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.087 19:09:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.347 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:05.347 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:05.347 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:05.347 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:05.347 19:09:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:05.347 19:09:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.347 19:09:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:05.347 19:09:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.347 19:09:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:05.347 19:09:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.347 19:09:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:05.607 00:05:05.607 real 0m27.725s 00:05:05.607 user 0m8.070s 00:05:05.607 sys 0m14.323s 00:05:05.607 19:09:24 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.607 19:09:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:05.607 ************************************ 00:05:05.607 END TEST devices 00:05:05.607 ************************************ 00:05:05.607 19:09:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:05.607 00:05:05.607 real 1m33.557s 00:05:05.607 user 0m31.298s 00:05:05.607 sys 0m53.192s 00:05:05.607 19:09:24 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.607 19:09:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.607 ************************************ 00:05:05.607 END TEST setup.sh 00:05:05.607 ************************************ 00:05:05.607 19:09:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.607 19:09:24 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:08.907 Hugepages 00:05:08.907 node hugesize free / total 00:05:08.907 node0 1048576kB 0 / 0 00:05:08.907 node0 2048kB 2048 / 2048 00:05:08.907 node1 1048576kB 0 / 0 00:05:08.907 node1 2048kB 0 / 0 00:05:08.907 00:05:08.907 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.907 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:08.907 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:08.907 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:08.907 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:08.907 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:08.907 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:08.907 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:08.907 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:08.907 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:08.907 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:08.907 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:08.907 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:08.907 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:08.907 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:08.907 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:08.907 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:08.907 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:08.907 19:09:27 -- spdk/autotest.sh@130 -- # uname -s 00:05:08.907 19:09:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:08.907 19:09:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:08.907 19:09:27 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.212 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:12.212 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:12.213 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:14.127 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:14.388 19:09:33 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:15.328 19:09:34 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:15.328 19:09:34 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:15.328 19:09:34 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:15.328 19:09:34 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:15.328 19:09:34 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:15.328 19:09:34 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:15.328 19:09:34 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:15.329 19:09:34 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:15.329 19:09:34 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:15.329 19:09:34 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:15.329 19:09:34 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:15.329 19:09:34 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.632 Waiting for block devices as requested 00:05:18.632 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:18.632 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:18.632 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:18.632 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:18.632 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:18.632 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:18.632 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:18.892 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:18.892 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:19.153 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:19.153 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:19.153 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:19.153 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:19.414 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:19.414 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:19.414 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:19.673 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:19.933 19:09:38 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:19.933 19:09:38 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:19.933 19:09:38 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:19.933 19:09:38 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:19.933 19:09:38 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:19.933 19:09:38 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:19.933 19:09:38 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:19.933 19:09:38 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:19.933 19:09:38 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:19.933 19:09:38 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:19.933 19:09:38 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:19.933 19:09:38 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:19.933 19:09:38 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:19.933 19:09:38 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:19.933 19:09:38 -- common/autotest_common.sh@1557 -- # continue 00:05:19.933 19:09:38 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:19.933 19:09:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.933 19:09:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.933 19:09:38 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:19.933 19:09:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.933 19:09:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.933 19:09:38 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.232 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:05:23.232 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:23.232 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:23.232 19:09:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:23.232 19:09:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.232 19:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.232 19:09:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:23.232 19:09:42 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:23.232 19:09:42 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.232 19:09:42 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:23.232 19:09:42 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:23.232 19:09:42 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:23.232 19:09:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:23.232 19:09:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:23.232 19:09:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.232 19:09:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:23.232 19:09:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:23.493 19:09:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:23.493 19:09:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:23.493 19:09:42 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:23.493 19:09:42 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:23.493 19:09:42 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:23.493 19:09:42 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:23.493 19:09:42 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:23.493 19:09:42 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:23.493 19:09:42 -- common/autotest_common.sh@1593 -- # return 0 00:05:23.493 19:09:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:23.493 19:09:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:23.493 19:09:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:23.493 19:09:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:23.493 19:09:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:23.493 19:09:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.493 19:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.493 19:09:42 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:23.493 19:09:42 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.493 19:09:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.493 19:09:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.493 19:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.493 ************************************ 00:05:23.493 START TEST env 00:05:23.493 ************************************ 00:05:23.493 19:09:42 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.493 * Looking for test storage... 
00:05:23.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:23.493 19:09:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.493 19:09:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.493 19:09:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.493 19:09:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.493 ************************************ 00:05:23.493 START TEST env_memory 00:05:23.493 ************************************ 00:05:23.493 19:09:42 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.784 00:05:23.784 00:05:23.784 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.784 http://cunit.sourceforge.net/ 00:05:23.784 00:05:23.784 00:05:23.784 Suite: memory 00:05:23.785 Test: alloc and free memory map ...[2024-07-22 19:09:42.511900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.785 passed 00:05:23.785 Test: mem map translation ...[2024-07-22 19:09:42.553823] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.785 [2024-07-22 19:09:42.553859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.785 [2024-07-22 19:09:42.553924] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.785 [2024-07-22 19:09:42.553941] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.785 passed 00:05:23.785 Test: mem map registration ...[2024-07-22 19:09:42.627415] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:23.785 [2024-07-22 19:09:42.627442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:23.785 passed 00:05:24.047 Test: mem map adjacent registrations ...passed 00:05:24.047 00:05:24.047 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.047 suites 1 1 n/a 0 0 00:05:24.047 tests 4 4 4 0 0 00:05:24.047 asserts 152 152 152 0 n/a 00:05:24.047 00:05:24.047 Elapsed time = 0.259 seconds 00:05:24.047 00:05:24.047 real 0m0.296s 00:05:24.047 user 0m0.267s 00:05:24.047 sys 0m0.028s 00:05:24.047 19:09:42 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.047 19:09:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:24.047 ************************************ 00:05:24.047 END TEST env_memory 00:05:24.047 ************************************ 00:05:24.047 19:09:42 env -- common/autotest_common.sh@1142 -- # return 0 00:05:24.047 19:09:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:24.047 19:09:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:24.047 19:09:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.047 19:09:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.047 ************************************ 00:05:24.047 START TEST env_vtophys 00:05:24.047 ************************************ 00:05:24.047 19:09:42 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:24.047 EAL: lib.eal log level changed from notice to debug 00:05:24.047 EAL: Detected lcore 0 as core 0 on socket 0 00:05:24.047 EAL: Detected lcore 1 as core 1 on socket 0 00:05:24.047 EAL: Detected lcore 2 as core 2 on socket 0 00:05:24.047 EAL: Detected lcore 3 as core 3 on socket 0 00:05:24.047 EAL: Detected lcore 4 as core 4 on socket 0 00:05:24.047 EAL: Detected lcore 5 as core 5 on socket 0 00:05:24.047 EAL: Detected lcore 6 as core 6 on socket 0 00:05:24.047 EAL: Detected lcore 7 as core 7 on socket 0 00:05:24.047 EAL: Detected lcore 8 as core 8 on socket 0 00:05:24.047 EAL: Detected lcore 9 as core 9 on socket 0 00:05:24.047 EAL: Detected lcore 10 as core 10 on socket 0 00:05:24.047 EAL: Detected lcore 11 as core 11 on socket 0 00:05:24.047 EAL: Detected lcore 12 as core 12 on socket 0 00:05:24.047 EAL: Detected lcore 13 as core 13 on socket 0 00:05:24.047 EAL: Detected lcore 14 as core 14 on socket 0 00:05:24.047 EAL: Detected lcore 15 as core 15 on socket 0 00:05:24.047 EAL: Detected lcore 16 as core 16 on socket 0 00:05:24.047 EAL: Detected lcore 17 as core 17 on socket 0 00:05:24.047 EAL: Detected lcore 18 as core 18 on socket 0 00:05:24.047 EAL: Detected lcore 19 as core 19 on socket 0 00:05:24.047 EAL: Detected lcore 20 as core 20 on socket 0 00:05:24.047 EAL: Detected lcore 21 as core 21 on socket 0 00:05:24.047 EAL: Detected lcore 22 as core 22 on socket 0 00:05:24.047 EAL: Detected lcore 23 as core 23 on socket 0 00:05:24.047 EAL: Detected lcore 24 as core 24 on socket 0 00:05:24.047 EAL: Detected lcore 25 as core 25 on socket 0 00:05:24.047 EAL: Detected lcore 26 as core 26 on socket 0 00:05:24.047 EAL: Detected lcore 27 as core 27 on socket 0 00:05:24.047 EAL: Detected lcore 28 as core 28 on socket 0 00:05:24.047 EAL: Detected lcore 29 as core 29 on socket 0 00:05:24.047 EAL: Detected lcore 30 as core 30 on socket 0 00:05:24.047 EAL: Detected lcore 31 as core 31 on socket 0 00:05:24.047 EAL: Detected lcore 32 as core 32 on socket 0 00:05:24.047 EAL: Detected lcore 33 as core 33 on socket 0 00:05:24.047 EAL: Detected lcore 34 as core 34 on socket 0 00:05:24.047 EAL: Detected lcore 35 as core 35 on socket 0 00:05:24.047 EAL: Detected lcore 36 as core 0 on socket 1 00:05:24.047 EAL: Detected lcore 37 as core 1 on socket 1 00:05:24.047 EAL: Detected lcore 38 as core 2 on socket 1 00:05:24.047 EAL: Detected lcore 39 as core 3 on socket 1 00:05:24.047 EAL: Detected lcore 40 as core 4 on socket 1 00:05:24.047 EAL: Detected lcore 41 as core 5 on socket 1 00:05:24.047 EAL: Detected lcore 42 as core 6 on socket 1 00:05:24.047 EAL: Detected lcore 43 as core 7 on socket 1 00:05:24.047 EAL: Detected lcore 44 as core 8 on socket 1 00:05:24.047 EAL: Detected lcore 45 as core 9 on socket 1 00:05:24.047 EAL: Detected lcore 46 as core 10 on socket 1 00:05:24.047 EAL: Detected lcore 47 as core 11 on socket 1 00:05:24.047 EAL: Detected lcore 48 as core 12 on socket 1 00:05:24.047 EAL: Detected lcore 49 as core 13 on socket 1 00:05:24.047 EAL: Detected lcore 50 as core 14 on socket 1 00:05:24.047 EAL: Detected lcore 51 as core 15 on socket 1 00:05:24.047 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:24.047 EAL: Detected lcore 53 as core 17 on socket 1 00:05:24.047 EAL: Detected lcore 54 as core 18 on socket 1 00:05:24.047 EAL: Detected lcore 55 as core 19 on socket 1 00:05:24.047 EAL: Detected lcore 56 as core 20 on socket 1 00:05:24.047 EAL: Detected lcore 57 as core 21 on socket 1 00:05:24.047 EAL: Detected lcore 58 as core 22 on socket 1 00:05:24.047 EAL: Detected lcore 59 as core 23 on socket 1 00:05:24.047 EAL: Detected lcore 60 as core 24 on socket 1 00:05:24.047 EAL: Detected lcore 61 as core 25 on socket 1 00:05:24.047 EAL: Detected lcore 62 as core 26 on socket 1 00:05:24.047 EAL: Detected lcore 63 as core 27 on socket 1 00:05:24.047 EAL: Detected lcore 64 as core 28 on socket 1 00:05:24.047 EAL: Detected lcore 65 as core 29 on socket 1 00:05:24.047 EAL: Detected lcore 66 as core 30 on socket 1 00:05:24.047 EAL: Detected lcore 67 as core 31 on socket 1 00:05:24.047 EAL: Detected lcore 68 as core 32 on socket 1 00:05:24.047 EAL: Detected lcore 69 as core 33 on socket 1 00:05:24.047 EAL: Detected lcore 70 as core 34 on socket 1 00:05:24.047 EAL: Detected lcore 71 as core 35 on socket 1 00:05:24.047 EAL: Detected lcore 72 as core 0 on socket 0 00:05:24.047 EAL: Detected lcore 73 as core 1 on socket 0 00:05:24.047 EAL: Detected lcore 74 as core 2 on socket 0 00:05:24.047 EAL: Detected lcore 75 as core 3 on socket 0 00:05:24.047 EAL: Detected lcore 76 as core 4 on socket 0 00:05:24.047 EAL: Detected lcore 77 as core 5 on socket 0 00:05:24.047 EAL: Detected lcore 78 as core 6 on socket 0 00:05:24.047 EAL: Detected lcore 79 as core 7 on socket 0 00:05:24.047 EAL: Detected lcore 80 as core 8 on socket 0 00:05:24.047 EAL: Detected lcore 81 as core 9 on socket 0 00:05:24.047 EAL: Detected lcore 82 as core 10 on socket 0 00:05:24.047 EAL: Detected lcore 83 as core 11 on socket 0 00:05:24.047 EAL: Detected lcore 84 as core 12 on socket 0 00:05:24.047 EAL: Detected lcore 85 as core 13 on socket 0 00:05:24.047 EAL: Detected lcore 86 as core 14 on socket 0 00:05:24.047 EAL: Detected lcore 87 as core 15 on socket 0 00:05:24.047 EAL: Detected lcore 88 as core 16 on socket 0 00:05:24.047 EAL: Detected lcore 89 as core 17 on socket 0 00:05:24.047 EAL: Detected lcore 90 as core 18 on socket 0 00:05:24.047 EAL: Detected lcore 91 as core 19 on socket 0 00:05:24.047 EAL: Detected lcore 92 as core 20 on socket 0 00:05:24.047 EAL: Detected lcore 93 as core 21 on socket 0 00:05:24.047 EAL: Detected lcore 94 as core 22 on socket 0 00:05:24.047 EAL: Detected lcore 95 as core 23 on socket 0 00:05:24.047 EAL: Detected lcore 96 as core 24 on socket 0 00:05:24.047 EAL: Detected lcore 97 as core 25 on socket 0 00:05:24.047 EAL: Detected lcore 98 as core 26 on socket 0 00:05:24.047 EAL: Detected lcore 99 as core 27 on socket 0 00:05:24.047 EAL: Detected lcore 100 as core 28 on socket 0 00:05:24.047 EAL: Detected lcore 101 as core 29 on socket 0 00:05:24.047 EAL: Detected lcore 102 as core 30 on socket 0 00:05:24.047 EAL: Detected lcore 103 as core 31 on socket 0 00:05:24.047 EAL: Detected lcore 104 as core 32 on socket 0 00:05:24.047 EAL: Detected lcore 105 as core 33 on socket 0 00:05:24.047 EAL: Detected lcore 106 as core 34 on socket 0 00:05:24.047 EAL: Detected lcore 107 as core 35 on socket 0 00:05:24.047 EAL: Detected lcore 108 as core 0 on socket 1 00:05:24.047 EAL: Detected lcore 109 as core 1 on socket 1 00:05:24.047 EAL: Detected lcore 110 as core 2 on socket 1 00:05:24.047 EAL: Detected lcore 111 as core 3 on socket 1 00:05:24.047 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:24.047 EAL: Detected lcore 113 as core 5 on socket 1 00:05:24.047 EAL: Detected lcore 114 as core 6 on socket 1 00:05:24.047 EAL: Detected lcore 115 as core 7 on socket 1 00:05:24.047 EAL: Detected lcore 116 as core 8 on socket 1 00:05:24.047 EAL: Detected lcore 117 as core 9 on socket 1 00:05:24.047 EAL: Detected lcore 118 as core 10 on socket 1 00:05:24.047 EAL: Detected lcore 119 as core 11 on socket 1 00:05:24.047 EAL: Detected lcore 120 as core 12 on socket 1 00:05:24.047 EAL: Detected lcore 121 as core 13 on socket 1 00:05:24.047 EAL: Detected lcore 122 as core 14 on socket 1 00:05:24.047 EAL: Detected lcore 123 as core 15 on socket 1 00:05:24.047 EAL: Detected lcore 124 as core 16 on socket 1 00:05:24.047 EAL: Detected lcore 125 as core 17 on socket 1 00:05:24.047 EAL: Detected lcore 126 as core 18 on socket 1 00:05:24.047 EAL: Detected lcore 127 as core 19 on socket 1 00:05:24.047 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:24.047 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:24.047 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:24.047 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:24.047 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:24.047 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:24.047 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:24.047 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:24.047 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:24.047 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:24.047 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:24.047 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:24.047 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:24.047 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:24.047 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:24.047 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:24.047 EAL: Maximum logical cores by configuration: 128 00:05:24.047 EAL: Detected CPU lcores: 128 00:05:24.047 EAL: Detected NUMA nodes: 2 00:05:24.047 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:24.047 EAL: Detected shared linkage of DPDK 00:05:24.047 EAL: No shared files mode enabled, IPC will be disabled 00:05:24.047 EAL: Bus pci wants IOVA as 'DC' 00:05:24.047 EAL: Buses did not request a specific IOVA mode. 00:05:24.047 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:24.047 EAL: Selected IOVA mode 'VA' 00:05:24.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.048 EAL: Probing VFIO support... 00:05:24.048 EAL: IOMMU type 1 (Type 1) is supported 00:05:24.048 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:24.048 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:24.048 EAL: VFIO support initialized 00:05:24.048 EAL: Ask a virtual area of 0x2e000 bytes 00:05:24.048 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:24.048 EAL: Setting up physically contiguous memory... 
00:05:24.048 EAL: Setting maximum number of open files to 524288 00:05:24.048 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:24.048 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:24.048 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:24.048 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:24.048 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.048 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:24.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.048 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.048 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:24.048 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:24.048 EAL: Hugepages will be freed exactly as allocated. 00:05:24.048 EAL: No shared files mode enabled, IPC is disabled 00:05:24.048 EAL: No shared files mode enabled, IPC is disabled 00:05:24.048 EAL: TSC frequency is ~2400000 KHz 00:05:24.048 EAL: Main lcore 0 is ready (tid=7f4a4ce12a40;cpuset=[0]) 00:05:24.048 EAL: Trying to obtain current memory policy. 00:05:24.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.048 EAL: Restoring previous memory policy: 0 00:05:24.048 EAL: request: mp_malloc_sync 00:05:24.048 EAL: No shared files mode enabled, IPC is disabled 00:05:24.048 EAL: Heap on socket 0 was expanded by 2MB 00:05:24.048 EAL: No shared files mode enabled, IPC is disabled 00:05:24.048 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:24.048 EAL: Mem event callback 'spdk:(nil)' registered 00:05:24.048 00:05:24.048 00:05:24.048 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.048 http://cunit.sourceforge.net/ 00:05:24.048 00:05:24.048 00:05:24.048 Suite: components_suite 00:05:24.618 Test: vtophys_malloc_test ...passed 00:05:24.618 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:24.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.618 EAL: Restoring previous memory policy: 4 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was expanded by 4MB 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was shrunk by 4MB 00:05:24.618 EAL: Trying to obtain current memory policy. 00:05:24.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.618 EAL: Restoring previous memory policy: 4 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was expanded by 6MB 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was shrunk by 6MB 00:05:24.618 EAL: Trying to obtain current memory policy. 00:05:24.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.618 EAL: Restoring previous memory policy: 4 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was expanded by 10MB 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was shrunk by 10MB 00:05:24.618 EAL: Trying to obtain current memory policy. 
00:05:24.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.618 EAL: Restoring previous memory policy: 4 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was expanded by 18MB 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was shrunk by 18MB 00:05:24.618 EAL: Trying to obtain current memory policy. 00:05:24.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.618 EAL: Restoring previous memory policy: 4 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was expanded by 34MB 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was shrunk by 34MB 00:05:24.618 EAL: Trying to obtain current memory policy. 00:05:24.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.618 EAL: Restoring previous memory policy: 4 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was expanded by 66MB 00:05:24.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.618 EAL: request: mp_malloc_sync 00:05:24.618 EAL: No shared files mode enabled, IPC is disabled 00:05:24.618 EAL: Heap on socket 0 was shrunk by 66MB 00:05:24.878 EAL: Trying to obtain current memory policy. 00:05:24.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.878 EAL: Restoring previous memory policy: 4 00:05:24.878 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.878 EAL: request: mp_malloc_sync 00:05:24.878 EAL: No shared files mode enabled, IPC is disabled 00:05:24.878 EAL: Heap on socket 0 was expanded by 130MB 00:05:24.878 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.878 EAL: request: mp_malloc_sync 00:05:24.878 EAL: No shared files mode enabled, IPC is disabled 00:05:24.878 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.139 EAL: Trying to obtain current memory policy. 00:05:25.139 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.139 EAL: Restoring previous memory policy: 4 00:05:25.139 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.139 EAL: request: mp_malloc_sync 00:05:25.139 EAL: No shared files mode enabled, IPC is disabled 00:05:25.139 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.400 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.400 EAL: request: mp_malloc_sync 00:05:25.400 EAL: No shared files mode enabled, IPC is disabled 00:05:25.400 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.660 EAL: Trying to obtain current memory policy. 
00:05:25.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.921 EAL: Restoring previous memory policy: 4 00:05:25.921 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.921 EAL: request: mp_malloc_sync 00:05:25.921 EAL: No shared files mode enabled, IPC is disabled 00:05:25.921 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.493 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.493 EAL: request: mp_malloc_sync 00:05:26.493 EAL: No shared files mode enabled, IPC is disabled 00:05:26.493 EAL: Heap on socket 0 was shrunk by 514MB 00:05:27.064 EAL: Trying to obtain current memory policy. 00:05:27.064 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.324 EAL: Restoring previous memory policy: 4 00:05:27.324 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.324 EAL: request: mp_malloc_sync 00:05:27.324 EAL: No shared files mode enabled, IPC is disabled 00:05:27.324 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.709 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.709 EAL: request: mp_malloc_sync 00:05:28.709 EAL: No shared files mode enabled, IPC is disabled 00:05:28.709 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.648 passed 00:05:29.648 00:05:29.648 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.648 suites 1 1 n/a 0 0 00:05:29.648 tests 2 2 2 0 0 00:05:29.648 asserts 497 497 497 0 n/a 00:05:29.648 00:05:29.648 Elapsed time = 5.513 seconds 00:05:29.648 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.648 EAL: request: mp_malloc_sync 00:05:29.648 EAL: No shared files mode enabled, IPC is disabled 00:05:29.648 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.648 EAL: No shared files mode enabled, IPC is disabled 00:05:29.648 EAL: No shared files mode enabled, IPC is disabled 00:05:29.648 EAL: No shared files mode enabled, IPC is disabled 00:05:29.648 00:05:29.648 real 0m5.759s 00:05:29.648 user 0m4.988s 00:05:29.648 sys 0m0.725s 00:05:29.648 19:09:48 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.648 19:09:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.648 ************************************ 00:05:29.648 END TEST env_vtophys 00:05:29.648 ************************************ 00:05:29.909 19:09:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.909 19:09:48 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:29.909 19:09:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.909 19:09:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.909 19:09:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.909 ************************************ 00:05:29.909 START TEST env_pci 00:05:29.909 ************************************ 00:05:29.909 19:09:48 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:29.909 00:05:29.909 00:05:29.909 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.909 http://cunit.sourceforge.net/ 00:05:29.909 00:05:29.909 00:05:29.909 Suite: pci 00:05:29.909 Test: pci_hook ...[2024-07-22 19:09:48.689515] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2660916 has claimed it 00:05:29.909 EAL: Cannot find device (10000:00:01.0) 00:05:29.909 EAL: Failed to attach device on primary process 00:05:29.909 passed 00:05:29.909 
00:05:29.909 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.909 suites 1 1 n/a 0 0 00:05:29.909 tests 1 1 1 0 0 00:05:29.909 asserts 25 25 25 0 n/a 00:05:29.909 00:05:29.909 Elapsed time = 0.055 seconds 00:05:29.909 00:05:29.909 real 0m0.135s 00:05:29.909 user 0m0.052s 00:05:29.909 sys 0m0.081s 00:05:29.909 19:09:48 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.909 19:09:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.909 ************************************ 00:05:29.909 END TEST env_pci 00:05:29.909 ************************************ 00:05:29.909 19:09:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.909 19:09:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.909 19:09:48 env -- env/env.sh@15 -- # uname 00:05:29.909 19:09:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.909 19:09:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.909 19:09:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.909 19:09:48 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:29.909 19:09:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.909 19:09:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.170 ************************************ 00:05:30.170 START TEST env_dpdk_post_init 00:05:30.170 ************************************ 00:05:30.170 19:09:48 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.170 EAL: Detected CPU lcores: 128 00:05:30.170 EAL: Detected NUMA nodes: 2 00:05:30.170 EAL: Detected shared linkage of DPDK 00:05:30.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.170 EAL: Selected IOVA mode 'VA' 00:05:30.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.170 EAL: VFIO support initialized 00:05:30.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.170 EAL: Using IOMMU type 1 (Type 1) 00:05:30.431 EAL: Ignore mapping IO port bar(1) 00:05:30.431 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:30.692 EAL: Ignore mapping IO port bar(1) 00:05:30.692 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:30.953 EAL: Ignore mapping IO port bar(1) 00:05:30.953 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:30.953 EAL: Ignore mapping IO port bar(1) 00:05:31.213 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:31.213 EAL: Ignore mapping IO port bar(1) 00:05:31.474 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:31.474 EAL: Ignore mapping IO port bar(1) 00:05:31.735 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:31.735 EAL: Ignore mapping IO port bar(1) 00:05:31.735 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:31.994 EAL: Ignore mapping IO port bar(1) 00:05:31.994 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:32.254 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:32.515 EAL: Ignore mapping IO port bar(1) 00:05:32.515 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:32.515 EAL: Ignore mapping IO port bar(1) 00:05:32.775 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:32.775 EAL: Ignore mapping IO port bar(1) 00:05:33.036 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:33.036 EAL: Ignore mapping IO port bar(1) 00:05:33.296 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:33.296 EAL: Ignore mapping IO port bar(1) 00:05:33.296 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:33.556 EAL: Ignore mapping IO port bar(1) 00:05:33.556 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:33.817 EAL: Ignore mapping IO port bar(1) 00:05:33.817 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:34.077 EAL: Ignore mapping IO port bar(1) 00:05:34.077 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:34.077 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:34.077 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:34.337 Starting DPDK initialization... 00:05:34.337 Starting SPDK post initialization... 00:05:34.337 SPDK NVMe probe 00:05:34.337 Attaching to 0000:65:00.0 00:05:34.337 Attached to 0000:65:00.0 00:05:34.337 Cleaning up... 00:05:36.249 00:05:36.249 real 0m5.838s 00:05:36.249 user 0m0.233s 00:05:36.249 sys 0m0.159s 00:05:36.249 19:09:54 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.249 19:09:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.249 ************************************ 00:05:36.249 END TEST env_dpdk_post_init 00:05:36.249 ************************************ 00:05:36.249 19:09:54 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.249 19:09:54 env -- env/env.sh@26 -- # uname 00:05:36.249 19:09:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:36.249 19:09:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.249 19:09:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.249 19:09:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.249 19:09:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.249 ************************************ 00:05:36.249 START TEST env_mem_callbacks 00:05:36.249 ************************************ 00:05:36.249 19:09:54 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.249 EAL: Detected CPU lcores: 128 00:05:36.249 EAL: Detected NUMA nodes: 2 00:05:36.249 EAL: Detected shared linkage of DPDK 00:05:36.249 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.249 EAL: Selected IOVA mode 'VA' 00:05:36.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.249 EAL: VFIO support initialized 00:05:36.249 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.249 00:05:36.249 00:05:36.249 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.249 http://cunit.sourceforge.net/ 00:05:36.249 00:05:36.249 00:05:36.249 Suite: memory 00:05:36.249 Test: test ... 
00:05:36.249 register 0x200000200000 2097152 00:05:36.249 malloc 3145728 00:05:36.249 register 0x200000400000 4194304 00:05:36.249 buf 0x2000004fffc0 len 3145728 PASSED 00:05:36.249 malloc 64 00:05:36.249 buf 0x2000004ffec0 len 64 PASSED 00:05:36.249 malloc 4194304 00:05:36.249 register 0x200000800000 6291456 00:05:36.249 buf 0x2000009fffc0 len 4194304 PASSED 00:05:36.249 free 0x2000004fffc0 3145728 00:05:36.249 free 0x2000004ffec0 64 00:05:36.249 unregister 0x200000400000 4194304 PASSED 00:05:36.249 free 0x2000009fffc0 4194304 00:05:36.249 unregister 0x200000800000 6291456 PASSED 00:05:36.249 malloc 8388608 00:05:36.249 register 0x200000400000 10485760 00:05:36.249 buf 0x2000005fffc0 len 8388608 PASSED 00:05:36.249 free 0x2000005fffc0 8388608 00:05:36.250 unregister 0x200000400000 10485760 PASSED 00:05:36.250 passed 00:05:36.250 00:05:36.250 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.250 suites 1 1 n/a 0 0 00:05:36.250 tests 1 1 1 0 0 00:05:36.250 asserts 15 15 15 0 n/a 00:05:36.250 00:05:36.250 Elapsed time = 0.046 seconds 00:05:36.250 00:05:36.250 real 0m0.167s 00:05:36.250 user 0m0.083s 00:05:36.250 sys 0m0.082s 00:05:36.250 19:09:54 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.250 19:09:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:36.250 ************************************ 00:05:36.250 END TEST env_mem_callbacks 00:05:36.250 ************************************ 00:05:36.250 19:09:55 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.250 00:05:36.250 real 0m12.689s 00:05:36.250 user 0m5.791s 00:05:36.250 sys 0m1.434s 00:05:36.250 19:09:55 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.250 19:09:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.250 ************************************ 00:05:36.250 END TEST env 00:05:36.250 ************************************ 00:05:36.250 19:09:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.250 19:09:55 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.250 19:09:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.250 19:09:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.250 19:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:36.250 ************************************ 00:05:36.250 START TEST rpc 00:05:36.250 ************************************ 00:05:36.250 19:09:55 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.250 * Looking for test storage... 00:05:36.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:36.250 19:09:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2662280 00:05:36.250 19:09:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.250 19:09:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:36.250 19:09:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2662280 00:05:36.250 19:09:55 rpc -- common/autotest_common.sh@829 -- # '[' -z 2662280 ']' 00:05:36.250 19:09:55 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.250 19:09:55 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.250 19:09:55 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.250 19:09:55 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.250 19:09:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.511 [2024-07-22 19:09:55.291422] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:36.511 [2024-07-22 19:09:55.291573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662280 ] 00:05:36.511 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.511 [2024-07-22 19:09:55.417137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.772 [2024-07-22 19:09:55.596964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:36.772 [2024-07-22 19:09:55.597013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2662280' to capture a snapshot of events at runtime. 00:05:36.772 [2024-07-22 19:09:55.597024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:36.772 [2024-07-22 19:09:55.597034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:36.772 [2024-07-22 19:09:55.597043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2662280 for offline analysis/debug. 00:05:36.772 [2024-07-22 19:09:55.597072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.343 19:09:56 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.343 19:09:56 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:37.343 19:09:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.343 19:09:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.343 19:09:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.343 19:09:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.343 19:09:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.343 19:09:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.343 19:09:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.343 ************************************ 00:05:37.343 START TEST rpc_integrity 00:05:37.343 ************************************ 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:37.343 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.343 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:37.343 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.343 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.343 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.343 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.343 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.343 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.605 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.605 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.605 { 00:05:37.605 "name": "Malloc0", 00:05:37.605 "aliases": [ 00:05:37.605 "3d02cd9d-8744-438f-ac06-abd007c47f6e" 00:05:37.605 ], 00:05:37.605 "product_name": "Malloc disk", 00:05:37.605 "block_size": 512, 00:05:37.605 "num_blocks": 16384, 00:05:37.605 "uuid": "3d02cd9d-8744-438f-ac06-abd007c47f6e", 00:05:37.605 "assigned_rate_limits": { 00:05:37.605 "rw_ios_per_sec": 0, 00:05:37.605 "rw_mbytes_per_sec": 0, 00:05:37.605 "r_mbytes_per_sec": 0, 00:05:37.605 "w_mbytes_per_sec": 0 00:05:37.605 }, 00:05:37.605 "claimed": false, 00:05:37.605 "zoned": false, 00:05:37.605 "supported_io_types": { 00:05:37.605 "read": true, 00:05:37.605 "write": true, 00:05:37.605 "unmap": true, 00:05:37.605 "flush": true, 00:05:37.605 "reset": true, 00:05:37.605 "nvme_admin": false, 00:05:37.605 "nvme_io": false, 00:05:37.605 "nvme_io_md": false, 00:05:37.605 "write_zeroes": true, 00:05:37.605 "zcopy": true, 00:05:37.605 "get_zone_info": false, 00:05:37.605 "zone_management": false, 00:05:37.605 "zone_append": false, 00:05:37.605 "compare": false, 00:05:37.605 "compare_and_write": false, 00:05:37.605 "abort": true, 00:05:37.605 "seek_hole": false, 00:05:37.605 "seek_data": false, 00:05:37.605 "copy": true, 00:05:37.605 "nvme_iov_md": false 00:05:37.605 }, 00:05:37.605 "memory_domains": [ 00:05:37.605 { 00:05:37.605 "dma_device_id": "system", 00:05:37.605 "dma_device_type": 1 00:05:37.605 }, 00:05:37.605 { 00:05:37.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.605 "dma_device_type": 2 00:05:37.605 } 00:05:37.605 ], 00:05:37.605 "driver_specific": {} 00:05:37.605 } 00:05:37.605 ]' 00:05:37.605 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.605 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.605 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.605 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.605 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.605 [2024-07-22 19:09:56.350894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.605 [2024-07-22 19:09:56.350946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.605 [2024-07-22 19:09:56.350969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001ce80 00:05:37.605 [2024-07-22 19:09:56.350982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
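The rpc_integrity steps traced here drive the bdev layer purely over JSON-RPC through the test's rpc_cmd wrapper. As an illustrative sketch only (not part of the captured log), the same sequence issued by hand would look roughly like the following, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock and the scripts/rpc.py client from the SPDK tree checked out by this job:

  # create an 8 MiB malloc bdev with 512-byte blocks, then layer a passthru bdev on it
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
  # the test asserts that exactly two bdevs are reported at this point
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length
  # tear down in reverse order and confirm the bdev list is empty again
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length

In this run the malloc bdev comes back named Malloc0; the remaining passthru registration notices and the full bdev dumps the test inspects continue in the trace below.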
00:05:37.605 [2024-07-22 19:09:56.353098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.605 [2024-07-22 19:09:56.353128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.605 Passthru0 00:05:37.605 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.605 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.605 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.605 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.605 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.605 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.605 { 00:05:37.605 "name": "Malloc0", 00:05:37.605 "aliases": [ 00:05:37.605 "3d02cd9d-8744-438f-ac06-abd007c47f6e" 00:05:37.605 ], 00:05:37.605 "product_name": "Malloc disk", 00:05:37.605 "block_size": 512, 00:05:37.605 "num_blocks": 16384, 00:05:37.605 "uuid": "3d02cd9d-8744-438f-ac06-abd007c47f6e", 00:05:37.605 "assigned_rate_limits": { 00:05:37.605 "rw_ios_per_sec": 0, 00:05:37.605 "rw_mbytes_per_sec": 0, 00:05:37.606 "r_mbytes_per_sec": 0, 00:05:37.606 "w_mbytes_per_sec": 0 00:05:37.606 }, 00:05:37.606 "claimed": true, 00:05:37.606 "claim_type": "exclusive_write", 00:05:37.606 "zoned": false, 00:05:37.606 "supported_io_types": { 00:05:37.606 "read": true, 00:05:37.606 "write": true, 00:05:37.606 "unmap": true, 00:05:37.606 "flush": true, 00:05:37.606 "reset": true, 00:05:37.606 "nvme_admin": false, 00:05:37.606 "nvme_io": false, 00:05:37.606 "nvme_io_md": false, 00:05:37.606 "write_zeroes": true, 00:05:37.606 "zcopy": true, 00:05:37.606 "get_zone_info": false, 00:05:37.606 "zone_management": false, 00:05:37.606 "zone_append": false, 00:05:37.606 "compare": false, 00:05:37.606 "compare_and_write": false, 00:05:37.606 "abort": true, 00:05:37.606 "seek_hole": false, 00:05:37.606 "seek_data": false, 00:05:37.606 "copy": true, 00:05:37.606 "nvme_iov_md": false 00:05:37.606 }, 00:05:37.606 "memory_domains": [ 00:05:37.606 { 00:05:37.606 "dma_device_id": "system", 00:05:37.606 "dma_device_type": 1 00:05:37.606 }, 00:05:37.606 { 00:05:37.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.606 "dma_device_type": 2 00:05:37.606 } 00:05:37.606 ], 00:05:37.606 "driver_specific": {} 00:05:37.606 }, 00:05:37.606 { 00:05:37.606 "name": "Passthru0", 00:05:37.606 "aliases": [ 00:05:37.606 "01d7c5df-b196-59d8-b4bf-41a2181b9e7e" 00:05:37.606 ], 00:05:37.606 "product_name": "passthru", 00:05:37.606 "block_size": 512, 00:05:37.606 "num_blocks": 16384, 00:05:37.606 "uuid": "01d7c5df-b196-59d8-b4bf-41a2181b9e7e", 00:05:37.606 "assigned_rate_limits": { 00:05:37.606 "rw_ios_per_sec": 0, 00:05:37.606 "rw_mbytes_per_sec": 0, 00:05:37.606 "r_mbytes_per_sec": 0, 00:05:37.606 "w_mbytes_per_sec": 0 00:05:37.606 }, 00:05:37.606 "claimed": false, 00:05:37.606 "zoned": false, 00:05:37.606 "supported_io_types": { 00:05:37.606 "read": true, 00:05:37.606 "write": true, 00:05:37.606 "unmap": true, 00:05:37.606 "flush": true, 00:05:37.606 "reset": true, 00:05:37.606 "nvme_admin": false, 00:05:37.606 "nvme_io": false, 00:05:37.606 "nvme_io_md": false, 00:05:37.606 "write_zeroes": true, 00:05:37.606 "zcopy": true, 00:05:37.606 "get_zone_info": false, 00:05:37.606 "zone_management": false, 00:05:37.606 "zone_append": false, 00:05:37.606 "compare": false, 00:05:37.606 "compare_and_write": false, 00:05:37.606 "abort": true, 00:05:37.606 
"seek_hole": false, 00:05:37.606 "seek_data": false, 00:05:37.606 "copy": true, 00:05:37.606 "nvme_iov_md": false 00:05:37.606 }, 00:05:37.606 "memory_domains": [ 00:05:37.606 { 00:05:37.606 "dma_device_id": "system", 00:05:37.606 "dma_device_type": 1 00:05:37.606 }, 00:05:37.606 { 00:05:37.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.606 "dma_device_type": 2 00:05:37.606 } 00:05:37.606 ], 00:05:37.606 "driver_specific": { 00:05:37.606 "passthru": { 00:05:37.606 "name": "Passthru0", 00:05:37.606 "base_bdev_name": "Malloc0" 00:05:37.606 } 00:05:37.606 } 00:05:37.606 } 00:05:37.606 ]' 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.606 19:09:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.606 00:05:37.606 real 0m0.314s 00:05:37.606 user 0m0.184s 00:05:37.606 sys 0m0.046s 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.606 19:09:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.606 ************************************ 00:05:37.606 END TEST rpc_integrity 00:05:37.606 ************************************ 00:05:37.606 19:09:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.606 19:09:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.606 19:09:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.606 19:09:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.606 19:09:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.867 ************************************ 00:05:37.867 START TEST rpc_plugins 00:05:37.867 ************************************ 00:05:37.867 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:37.867 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.867 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.867 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.867 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.867 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.867 19:09:56 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.867 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.868 { 00:05:37.868 "name": "Malloc1", 00:05:37.868 "aliases": [ 00:05:37.868 "d67c4d83-a751-4ac5-8a4c-b2668713576d" 00:05:37.868 ], 00:05:37.868 "product_name": "Malloc disk", 00:05:37.868 "block_size": 4096, 00:05:37.868 "num_blocks": 256, 00:05:37.868 "uuid": "d67c4d83-a751-4ac5-8a4c-b2668713576d", 00:05:37.868 "assigned_rate_limits": { 00:05:37.868 "rw_ios_per_sec": 0, 00:05:37.868 "rw_mbytes_per_sec": 0, 00:05:37.868 "r_mbytes_per_sec": 0, 00:05:37.868 "w_mbytes_per_sec": 0 00:05:37.868 }, 00:05:37.868 "claimed": false, 00:05:37.868 "zoned": false, 00:05:37.868 "supported_io_types": { 00:05:37.868 "read": true, 00:05:37.868 "write": true, 00:05:37.868 "unmap": true, 00:05:37.868 "flush": true, 00:05:37.868 "reset": true, 00:05:37.868 "nvme_admin": false, 00:05:37.868 "nvme_io": false, 00:05:37.868 "nvme_io_md": false, 00:05:37.868 "write_zeroes": true, 00:05:37.868 "zcopy": true, 00:05:37.868 "get_zone_info": false, 00:05:37.868 "zone_management": false, 00:05:37.868 "zone_append": false, 00:05:37.868 "compare": false, 00:05:37.868 "compare_and_write": false, 00:05:37.868 "abort": true, 00:05:37.868 "seek_hole": false, 00:05:37.868 "seek_data": false, 00:05:37.868 "copy": true, 00:05:37.868 "nvme_iov_md": false 00:05:37.868 }, 00:05:37.868 "memory_domains": [ 00:05:37.868 { 00:05:37.868 "dma_device_id": "system", 00:05:37.868 "dma_device_type": 1 00:05:37.868 }, 00:05:37.868 { 00:05:37.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.868 "dma_device_type": 2 00:05:37.868 } 00:05:37.868 ], 00:05:37.868 "driver_specific": {} 00:05:37.868 } 00:05:37.868 ]' 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:37.868 19:09:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.868 00:05:37.868 real 0m0.145s 00:05:37.868 user 0m0.086s 00:05:37.868 sys 0m0.021s 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.868 19:09:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.868 ************************************ 00:05:37.868 END TEST rpc_plugins 00:05:37.868 ************************************ 00:05:37.868 19:09:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.868 19:09:56 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:37.868 19:09:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.868 19:09:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.868 19:09:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.868 ************************************ 00:05:37.868 START TEST rpc_trace_cmd_test 00:05:37.868 ************************************ 00:05:37.868 19:09:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:37.868 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:37.868 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:37.868 19:09:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.868 19:09:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.868 19:09:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.868 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:37.868 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2662280", 00:05:37.868 "tpoint_group_mask": "0x8", 00:05:37.868 "iscsi_conn": { 00:05:37.868 "mask": "0x2", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "scsi": { 00:05:37.868 "mask": "0x4", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "bdev": { 00:05:37.868 "mask": "0x8", 00:05:37.868 "tpoint_mask": "0xffffffffffffffff" 00:05:37.868 }, 00:05:37.868 "nvmf_rdma": { 00:05:37.868 "mask": "0x10", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "nvmf_tcp": { 00:05:37.868 "mask": "0x20", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "ftl": { 00:05:37.868 "mask": "0x40", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "blobfs": { 00:05:37.868 "mask": "0x80", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "dsa": { 00:05:37.868 "mask": "0x200", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "thread": { 00:05:37.868 "mask": "0x400", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "nvme_pcie": { 00:05:37.868 "mask": "0x800", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "iaa": { 00:05:37.868 "mask": "0x1000", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "nvme_tcp": { 00:05:37.868 "mask": "0x2000", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "bdev_nvme": { 00:05:37.868 "mask": "0x4000", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 }, 00:05:37.868 "sock": { 00:05:37.868 "mask": "0x8000", 00:05:37.868 "tpoint_mask": "0x0" 00:05:37.868 } 00:05:37.868 }' 00:05:38.129 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.129 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:38.129 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.129 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.129 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.129 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.129 19:09:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.129 19:09:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.129 19:09:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.129 19:09:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:38.129 00:05:38.129 real 0m0.247s 00:05:38.129 user 0m0.209s 00:05:38.129 sys 0m0.031s 00:05:38.129 19:09:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.129 19:09:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.129 ************************************ 00:05:38.129 END TEST rpc_trace_cmd_test 00:05:38.129 ************************************ 00:05:38.391 19:09:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.391 19:09:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.391 19:09:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.391 19:09:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.391 19:09:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.391 19:09:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.391 19:09:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.391 ************************************ 00:05:38.391 START TEST rpc_daemon_integrity 00:05:38.391 ************************************ 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.391 { 00:05:38.391 "name": "Malloc2", 00:05:38.391 "aliases": [ 00:05:38.391 "57a787a5-5a41-460b-a3a9-b3e3241055b3" 00:05:38.391 ], 00:05:38.391 "product_name": "Malloc disk", 00:05:38.391 "block_size": 512, 00:05:38.391 "num_blocks": 16384, 00:05:38.391 "uuid": "57a787a5-5a41-460b-a3a9-b3e3241055b3", 00:05:38.391 "assigned_rate_limits": { 00:05:38.391 "rw_ios_per_sec": 0, 00:05:38.391 "rw_mbytes_per_sec": 0, 00:05:38.391 "r_mbytes_per_sec": 0, 00:05:38.391 "w_mbytes_per_sec": 0 00:05:38.391 }, 00:05:38.391 "claimed": false, 00:05:38.391 "zoned": false, 00:05:38.391 "supported_io_types": { 00:05:38.391 "read": true, 00:05:38.391 "write": true, 00:05:38.391 "unmap": true, 00:05:38.391 "flush": true, 00:05:38.391 "reset": true, 00:05:38.391 "nvme_admin": false, 
00:05:38.391 "nvme_io": false, 00:05:38.391 "nvme_io_md": false, 00:05:38.391 "write_zeroes": true, 00:05:38.391 "zcopy": true, 00:05:38.391 "get_zone_info": false, 00:05:38.391 "zone_management": false, 00:05:38.391 "zone_append": false, 00:05:38.391 "compare": false, 00:05:38.391 "compare_and_write": false, 00:05:38.391 "abort": true, 00:05:38.391 "seek_hole": false, 00:05:38.391 "seek_data": false, 00:05:38.391 "copy": true, 00:05:38.391 "nvme_iov_md": false 00:05:38.391 }, 00:05:38.391 "memory_domains": [ 00:05:38.391 { 00:05:38.391 "dma_device_id": "system", 00:05:38.391 "dma_device_type": 1 00:05:38.391 }, 00:05:38.391 { 00:05:38.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.391 "dma_device_type": 2 00:05:38.391 } 00:05:38.391 ], 00:05:38.391 "driver_specific": {} 00:05:38.391 } 00:05:38.391 ]' 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.391 [2024-07-22 19:09:57.269047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.391 [2024-07-22 19:09:57.269095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.391 [2024-07-22 19:09:57.269115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001e080 00:05:38.391 [2024-07-22 19:09:57.269127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.391 [2024-07-22 19:09:57.271217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.391 [2024-07-22 19:09:57.271244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.391 Passthru0 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.391 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.392 { 00:05:38.392 "name": "Malloc2", 00:05:38.392 "aliases": [ 00:05:38.392 "57a787a5-5a41-460b-a3a9-b3e3241055b3" 00:05:38.392 ], 00:05:38.392 "product_name": "Malloc disk", 00:05:38.392 "block_size": 512, 00:05:38.392 "num_blocks": 16384, 00:05:38.392 "uuid": "57a787a5-5a41-460b-a3a9-b3e3241055b3", 00:05:38.392 "assigned_rate_limits": { 00:05:38.392 "rw_ios_per_sec": 0, 00:05:38.392 "rw_mbytes_per_sec": 0, 00:05:38.392 "r_mbytes_per_sec": 0, 00:05:38.392 "w_mbytes_per_sec": 0 00:05:38.392 }, 00:05:38.392 "claimed": true, 00:05:38.392 "claim_type": "exclusive_write", 00:05:38.392 "zoned": false, 00:05:38.392 "supported_io_types": { 00:05:38.392 "read": true, 00:05:38.392 "write": true, 00:05:38.392 "unmap": true, 00:05:38.392 "flush": true, 00:05:38.392 "reset": true, 00:05:38.392 "nvme_admin": false, 00:05:38.392 "nvme_io": false, 00:05:38.392 "nvme_io_md": false, 00:05:38.392 "write_zeroes": true, 00:05:38.392 "zcopy": 
true, 00:05:38.392 "get_zone_info": false, 00:05:38.392 "zone_management": false, 00:05:38.392 "zone_append": false, 00:05:38.392 "compare": false, 00:05:38.392 "compare_and_write": false, 00:05:38.392 "abort": true, 00:05:38.392 "seek_hole": false, 00:05:38.392 "seek_data": false, 00:05:38.392 "copy": true, 00:05:38.392 "nvme_iov_md": false 00:05:38.392 }, 00:05:38.392 "memory_domains": [ 00:05:38.392 { 00:05:38.392 "dma_device_id": "system", 00:05:38.392 "dma_device_type": 1 00:05:38.392 }, 00:05:38.392 { 00:05:38.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.392 "dma_device_type": 2 00:05:38.392 } 00:05:38.392 ], 00:05:38.392 "driver_specific": {} 00:05:38.392 }, 00:05:38.392 { 00:05:38.392 "name": "Passthru0", 00:05:38.392 "aliases": [ 00:05:38.392 "abd4011c-0084-5bbf-b5db-2b2e74e84174" 00:05:38.392 ], 00:05:38.392 "product_name": "passthru", 00:05:38.392 "block_size": 512, 00:05:38.392 "num_blocks": 16384, 00:05:38.392 "uuid": "abd4011c-0084-5bbf-b5db-2b2e74e84174", 00:05:38.392 "assigned_rate_limits": { 00:05:38.392 "rw_ios_per_sec": 0, 00:05:38.392 "rw_mbytes_per_sec": 0, 00:05:38.392 "r_mbytes_per_sec": 0, 00:05:38.392 "w_mbytes_per_sec": 0 00:05:38.392 }, 00:05:38.392 "claimed": false, 00:05:38.392 "zoned": false, 00:05:38.392 "supported_io_types": { 00:05:38.392 "read": true, 00:05:38.392 "write": true, 00:05:38.392 "unmap": true, 00:05:38.392 "flush": true, 00:05:38.392 "reset": true, 00:05:38.392 "nvme_admin": false, 00:05:38.392 "nvme_io": false, 00:05:38.392 "nvme_io_md": false, 00:05:38.392 "write_zeroes": true, 00:05:38.392 "zcopy": true, 00:05:38.392 "get_zone_info": false, 00:05:38.392 "zone_management": false, 00:05:38.392 "zone_append": false, 00:05:38.392 "compare": false, 00:05:38.392 "compare_and_write": false, 00:05:38.392 "abort": true, 00:05:38.392 "seek_hole": false, 00:05:38.392 "seek_data": false, 00:05:38.392 "copy": true, 00:05:38.392 "nvme_iov_md": false 00:05:38.392 }, 00:05:38.392 "memory_domains": [ 00:05:38.392 { 00:05:38.392 "dma_device_id": "system", 00:05:38.392 "dma_device_type": 1 00:05:38.392 }, 00:05:38.392 { 00:05:38.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.392 "dma_device_type": 2 00:05:38.392 } 00:05:38.392 ], 00:05:38.392 "driver_specific": { 00:05:38.392 "passthru": { 00:05:38.392 "name": "Passthru0", 00:05:38.392 "base_bdev_name": "Malloc2" 00:05:38.392 } 00:05:38.392 } 00:05:38.392 } 00:05:38.392 ]' 00:05:38.392 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.653 00:05:38.653 real 0m0.314s 00:05:38.653 user 0m0.190s 00:05:38.653 sys 0m0.042s 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.653 19:09:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.653 ************************************ 00:05:38.653 END TEST rpc_daemon_integrity 00:05:38.653 ************************************ 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.653 19:09:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.653 19:09:57 rpc -- rpc/rpc.sh@84 -- # killprocess 2662280 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@948 -- # '[' -z 2662280 ']' 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@952 -- # kill -0 2662280 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@953 -- # uname 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2662280 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2662280' 00:05:38.653 killing process with pid 2662280 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@967 -- # kill 2662280 00:05:38.653 19:09:57 rpc -- common/autotest_common.sh@972 -- # wait 2662280 00:05:40.567 00:05:40.567 real 0m4.069s 00:05:40.567 user 0m4.681s 00:05:40.567 sys 0m0.865s 00:05:40.567 19:09:59 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.567 19:09:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.567 ************************************ 00:05:40.567 END TEST rpc 00:05:40.567 ************************************ 00:05:40.567 19:09:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.567 19:09:59 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:40.567 19:09:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.567 19:09:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.567 19:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:40.567 ************************************ 00:05:40.567 START TEST skip_rpc 00:05:40.567 ************************************ 00:05:40.567 19:09:59 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:40.567 * Looking for test storage... 
00:05:40.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:40.567 19:09:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.567 19:09:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:40.567 19:09:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.567 19:09:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.567 19:09:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.567 19:09:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.567 ************************************ 00:05:40.567 START TEST skip_rpc 00:05:40.567 ************************************ 00:05:40.567 19:09:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:40.567 19:09:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2663225 00:05:40.567 19:09:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.567 19:09:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.567 19:09:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:40.567 [2024-07-22 19:09:59.469386] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:40.567 [2024-07-22 19:09:59.469494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663225 ] 00:05:40.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.828 [2024-07-22 19:09:59.582010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.828 [2024-07-22 19:09:59.756464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2663225 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2663225 ']' 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2663225 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2663225 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2663225' 00:05:46.188 killing process with pid 2663225 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2663225 00:05:46.188 19:10:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2663225 00:05:47.127 00:05:47.127 real 0m6.692s 00:05:47.127 user 0m6.356s 00:05:47.127 sys 0m0.354s 00:05:47.127 19:10:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.127 19:10:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 ************************************ 00:05:47.127 END TEST skip_rpc 00:05:47.127 ************************************ 00:05:47.387 19:10:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:47.387 19:10:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.387 19:10:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.387 19:10:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.387 19:10:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.387 ************************************ 00:05:47.387 START TEST skip_rpc_with_json 00:05:47.387 ************************************ 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2664602 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2664602 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2664602 ']' 00:05:47.387 19:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.388 19:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.388 19:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
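skip_rpc_with_json configures the freshly started target entirely over JSON-RPC and then proves the saved configuration can bring up an identical target with the RPC server disabled. A compressed, illustrative sketch of that round trip, assuming the same spdk_tgt binary, default socket and config.json path used by the test:

  # while no transport exists yet, the query is expected to fail
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp || true
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
  # persist the live configuration to JSON
  ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > config.json
  # after stopping the first target, replay the JSON with no RPC server at all
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json

The trace that follows shows the real run: the expected nvmf_get_transports error, the TCP transport being created, and the full save_config output whose 'TCP Transport Init' line the test later greps for in the second target's log.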
00:05:47.388 19:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.388 19:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.388 [2024-07-22 19:10:06.229008] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:47.388 [2024-07-22 19:10:06.229113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664602 ] 00:05:47.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.388 [2024-07-22 19:10:06.341003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.647 [2024-07-22 19:10:06.517215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 [2024-07-22 19:10:07.107053] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:48.217 request: 00:05:48.217 { 00:05:48.217 "trtype": "tcp", 00:05:48.217 "method": "nvmf_get_transports", 00:05:48.217 "req_id": 1 00:05:48.217 } 00:05:48.217 Got JSON-RPC error response 00:05:48.217 response: 00:05:48.217 { 00:05:48.217 "code": -19, 00:05:48.217 "message": "No such device" 00:05:48.217 } 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 [2024-07-22 19:10:07.119180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.217 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.478 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.478 19:10:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:48.478 { 00:05:48.478 "subsystems": [ 00:05:48.478 { 00:05:48.478 "subsystem": "keyring", 00:05:48.478 "config": [] 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "subsystem": "iobuf", 00:05:48.478 "config": [ 00:05:48.478 { 00:05:48.478 "method": "iobuf_set_options", 00:05:48.478 "params": { 00:05:48.478 "small_pool_count": 8192, 00:05:48.478 "large_pool_count": 1024, 00:05:48.478 "small_bufsize": 8192, 00:05:48.478 "large_bufsize": 135168 00:05:48.478 } 00:05:48.478 } 00:05:48.478 ] 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "subsystem": 
"sock", 00:05:48.478 "config": [ 00:05:48.478 { 00:05:48.478 "method": "sock_set_default_impl", 00:05:48.478 "params": { 00:05:48.478 "impl_name": "posix" 00:05:48.478 } 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "method": "sock_impl_set_options", 00:05:48.478 "params": { 00:05:48.478 "impl_name": "ssl", 00:05:48.478 "recv_buf_size": 4096, 00:05:48.478 "send_buf_size": 4096, 00:05:48.478 "enable_recv_pipe": true, 00:05:48.478 "enable_quickack": false, 00:05:48.478 "enable_placement_id": 0, 00:05:48.478 "enable_zerocopy_send_server": true, 00:05:48.478 "enable_zerocopy_send_client": false, 00:05:48.478 "zerocopy_threshold": 0, 00:05:48.478 "tls_version": 0, 00:05:48.478 "enable_ktls": false 00:05:48.478 } 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "method": "sock_impl_set_options", 00:05:48.478 "params": { 00:05:48.478 "impl_name": "posix", 00:05:48.478 "recv_buf_size": 2097152, 00:05:48.478 "send_buf_size": 2097152, 00:05:48.478 "enable_recv_pipe": true, 00:05:48.478 "enable_quickack": false, 00:05:48.478 "enable_placement_id": 0, 00:05:48.478 "enable_zerocopy_send_server": true, 00:05:48.478 "enable_zerocopy_send_client": false, 00:05:48.478 "zerocopy_threshold": 0, 00:05:48.478 "tls_version": 0, 00:05:48.478 "enable_ktls": false 00:05:48.478 } 00:05:48.478 } 00:05:48.478 ] 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "subsystem": "vmd", 00:05:48.478 "config": [] 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "subsystem": "accel", 00:05:48.478 "config": [ 00:05:48.478 { 00:05:48.478 "method": "accel_set_options", 00:05:48.478 "params": { 00:05:48.478 "small_cache_size": 128, 00:05:48.478 "large_cache_size": 16, 00:05:48.478 "task_count": 2048, 00:05:48.478 "sequence_count": 2048, 00:05:48.478 "buf_count": 2048 00:05:48.478 } 00:05:48.478 } 00:05:48.478 ] 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "subsystem": "bdev", 00:05:48.478 "config": [ 00:05:48.478 { 00:05:48.478 "method": "bdev_set_options", 00:05:48.478 "params": { 00:05:48.478 "bdev_io_pool_size": 65535, 00:05:48.478 "bdev_io_cache_size": 256, 00:05:48.478 "bdev_auto_examine": true, 00:05:48.478 "iobuf_small_cache_size": 128, 00:05:48.478 "iobuf_large_cache_size": 16 00:05:48.478 } 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "method": "bdev_raid_set_options", 00:05:48.478 "params": { 00:05:48.478 "process_window_size_kb": 1024, 00:05:48.478 "process_max_bandwidth_mb_sec": 0 00:05:48.478 } 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "method": "bdev_iscsi_set_options", 00:05:48.478 "params": { 00:05:48.478 "timeout_sec": 30 00:05:48.478 } 00:05:48.478 }, 00:05:48.478 { 00:05:48.478 "method": "bdev_nvme_set_options", 00:05:48.478 "params": { 00:05:48.478 "action_on_timeout": "none", 00:05:48.478 "timeout_us": 0, 00:05:48.478 "timeout_admin_us": 0, 00:05:48.478 "keep_alive_timeout_ms": 10000, 00:05:48.478 "arbitration_burst": 0, 00:05:48.479 "low_priority_weight": 0, 00:05:48.479 "medium_priority_weight": 0, 00:05:48.479 "high_priority_weight": 0, 00:05:48.479 "nvme_adminq_poll_period_us": 10000, 00:05:48.479 "nvme_ioq_poll_period_us": 0, 00:05:48.479 "io_queue_requests": 0, 00:05:48.479 "delay_cmd_submit": true, 00:05:48.479 "transport_retry_count": 4, 00:05:48.479 "bdev_retry_count": 3, 00:05:48.479 "transport_ack_timeout": 0, 00:05:48.479 "ctrlr_loss_timeout_sec": 0, 00:05:48.479 "reconnect_delay_sec": 0, 00:05:48.479 "fast_io_fail_timeout_sec": 0, 00:05:48.479 "disable_auto_failback": false, 00:05:48.479 "generate_uuids": false, 00:05:48.479 "transport_tos": 0, 00:05:48.479 "nvme_error_stat": false, 00:05:48.479 "rdma_srq_size": 
0, 00:05:48.479 "io_path_stat": false, 00:05:48.479 "allow_accel_sequence": false, 00:05:48.479 "rdma_max_cq_size": 0, 00:05:48.479 "rdma_cm_event_timeout_ms": 0, 00:05:48.479 "dhchap_digests": [ 00:05:48.479 "sha256", 00:05:48.479 "sha384", 00:05:48.479 "sha512" 00:05:48.479 ], 00:05:48.479 "dhchap_dhgroups": [ 00:05:48.479 "null", 00:05:48.479 "ffdhe2048", 00:05:48.479 "ffdhe3072", 00:05:48.479 "ffdhe4096", 00:05:48.479 "ffdhe6144", 00:05:48.479 "ffdhe8192" 00:05:48.479 ] 00:05:48.479 } 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "method": "bdev_nvme_set_hotplug", 00:05:48.479 "params": { 00:05:48.479 "period_us": 100000, 00:05:48.479 "enable": false 00:05:48.479 } 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "method": "bdev_wait_for_examine" 00:05:48.479 } 00:05:48.479 ] 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "scsi", 00:05:48.479 "config": null 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "scheduler", 00:05:48.479 "config": [ 00:05:48.479 { 00:05:48.479 "method": "framework_set_scheduler", 00:05:48.479 "params": { 00:05:48.479 "name": "static" 00:05:48.479 } 00:05:48.479 } 00:05:48.479 ] 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "vhost_scsi", 00:05:48.479 "config": [] 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "vhost_blk", 00:05:48.479 "config": [] 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "ublk", 00:05:48.479 "config": [] 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "nbd", 00:05:48.479 "config": [] 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "nvmf", 00:05:48.479 "config": [ 00:05:48.479 { 00:05:48.479 "method": "nvmf_set_config", 00:05:48.479 "params": { 00:05:48.479 "discovery_filter": "match_any", 00:05:48.479 "admin_cmd_passthru": { 00:05:48.479 "identify_ctrlr": false 00:05:48.479 } 00:05:48.479 } 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "method": "nvmf_set_max_subsystems", 00:05:48.479 "params": { 00:05:48.479 "max_subsystems": 1024 00:05:48.479 } 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "method": "nvmf_set_crdt", 00:05:48.479 "params": { 00:05:48.479 "crdt1": 0, 00:05:48.479 "crdt2": 0, 00:05:48.479 "crdt3": 0 00:05:48.479 } 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "method": "nvmf_create_transport", 00:05:48.479 "params": { 00:05:48.479 "trtype": "TCP", 00:05:48.479 "max_queue_depth": 128, 00:05:48.479 "max_io_qpairs_per_ctrlr": 127, 00:05:48.479 "in_capsule_data_size": 4096, 00:05:48.479 "max_io_size": 131072, 00:05:48.479 "io_unit_size": 131072, 00:05:48.479 "max_aq_depth": 128, 00:05:48.479 "num_shared_buffers": 511, 00:05:48.479 "buf_cache_size": 4294967295, 00:05:48.479 "dif_insert_or_strip": false, 00:05:48.479 "zcopy": false, 00:05:48.479 "c2h_success": true, 00:05:48.479 "sock_priority": 0, 00:05:48.479 "abort_timeout_sec": 1, 00:05:48.479 "ack_timeout": 0, 00:05:48.479 "data_wr_pool_size": 0 00:05:48.479 } 00:05:48.479 } 00:05:48.479 ] 00:05:48.479 }, 00:05:48.479 { 00:05:48.479 "subsystem": "iscsi", 00:05:48.479 "config": [ 00:05:48.479 { 00:05:48.479 "method": "iscsi_set_options", 00:05:48.479 "params": { 00:05:48.479 "node_base": "iqn.2016-06.io.spdk", 00:05:48.479 "max_sessions": 128, 00:05:48.479 "max_connections_per_session": 2, 00:05:48.479 "max_queue_depth": 64, 00:05:48.479 "default_time2wait": 2, 00:05:48.479 "default_time2retain": 20, 00:05:48.479 "first_burst_length": 8192, 00:05:48.479 "immediate_data": true, 00:05:48.479 "allow_duplicated_isid": false, 00:05:48.479 "error_recovery_level": 0, 00:05:48.479 "nop_timeout": 60, 00:05:48.479 
"nop_in_interval": 30, 00:05:48.479 "disable_chap": false, 00:05:48.479 "require_chap": false, 00:05:48.479 "mutual_chap": false, 00:05:48.479 "chap_group": 0, 00:05:48.479 "max_large_datain_per_connection": 64, 00:05:48.479 "max_r2t_per_connection": 4, 00:05:48.479 "pdu_pool_size": 36864, 00:05:48.479 "immediate_data_pool_size": 16384, 00:05:48.479 "data_out_pool_size": 2048 00:05:48.479 } 00:05:48.479 } 00:05:48.479 ] 00:05:48.479 } 00:05:48.479 ] 00:05:48.479 } 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2664602 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2664602 ']' 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2664602 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2664602 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2664602' 00:05:48.479 killing process with pid 2664602 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2664602 00:05:48.479 19:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2664602 00:05:50.392 19:10:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2665268 00:05:50.392 19:10:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:50.392 19:10:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:55.679 19:10:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2665268 00:05:55.679 19:10:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2665268 ']' 00:05:55.679 19:10:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2665268 00:05:55.679 19:10:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:55.679 19:10:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.679 19:10:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2665268 00:05:55.679 19:10:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.679 19:10:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.679 19:10:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2665268' 00:05:55.679 killing process with pid 2665268 00:05:55.679 19:10:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2665268 00:05:55.679 19:10:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2665268 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep 
-q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:57.065 00:05:57.065 real 0m9.507s 00:05:57.065 user 0m9.132s 00:05:57.065 sys 0m0.773s 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.065 ************************************ 00:05:57.065 END TEST skip_rpc_with_json 00:05:57.065 ************************************ 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.065 19:10:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.065 ************************************ 00:05:57.065 START TEST skip_rpc_with_delay 00:05:57.065 ************************************ 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.065 [2024-07-22 19:10:15.814145] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
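The error above is exactly what skip_rpc_with_delay is looking for: --wait-for-rpc defers framework initialization until an RPC tells the target to continue, which is meaningless when --no-rpc-server is given, so the combination has to be rejected. A minimal sketch of that negative check, assuming the same binary path as in the trace:

  # the target must refuse to start; a zero exit status would fail the test
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt unexpectedly accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi

The teardown messages and timing summary for this sub-test follow in the trace below.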
00:05:57.065 [2024-07-22 19:10:15.814291] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.065 00:05:57.065 real 0m0.153s 00:05:57.065 user 0m0.089s 00:05:57.065 sys 0m0.063s 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.065 19:10:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:57.065 ************************************ 00:05:57.065 END TEST skip_rpc_with_delay 00:05:57.065 ************************************ 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.065 19:10:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:57.065 19:10:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:57.065 19:10:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.065 19:10:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.065 ************************************ 00:05:57.065 START TEST exit_on_failed_rpc_init 00:05:57.065 ************************************ 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2666673 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2666673 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2666673 ']' 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.065 19:10:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.326 [2024-07-22 19:10:16.054773] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:57.326 [2024-07-22 19:10:16.054912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666673 ] 00:05:57.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.326 [2024-07-22 19:10:16.179542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.586 [2024-07-22 19:10:16.358984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:58.175 19:10:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.175 [2024-07-22 19:10:17.030787] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:58.175 [2024-07-22 19:10:17.030899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666890 ] 00:05:58.175 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.436 [2024-07-22 19:10:17.157247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.436 [2024-07-22 19:10:17.332367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.436 [2024-07-22 19:10:17.332455] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:58.436 [2024-07-22 19:10:17.332470] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:58.436 [2024-07-22 19:10:17.332481] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2666673 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2666673 ']' 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2666673 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.697 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666673 00:05:58.957 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.957 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.957 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666673' 00:05:58.957 killing process with pid 2666673 00:05:58.957 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2666673 00:05:58.957 19:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2666673 00:06:00.871 00:06:00.871 real 0m3.355s 00:06:00.871 user 0m3.801s 00:06:00.871 sys 0m0.593s 00:06:00.871 19:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.871 19:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.871 ************************************ 00:06:00.871 END TEST exit_on_failed_rpc_init 00:06:00.871 ************************************ 00:06:00.871 19:10:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:00.871 19:10:19 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.871 00:06:00.871 real 0m20.122s 00:06:00.871 user 0m19.528s 00:06:00.871 sys 0m2.074s 00:06:00.871 19:10:19 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.871 19:10:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.871 ************************************ 00:06:00.871 END TEST skip_rpc 00:06:00.871 ************************************ 00:06:00.871 19:10:19 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.871 19:10:19 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.871 19:10:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.871 19:10:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.871 19:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:00.871 ************************************ 00:06:00.871 START TEST rpc_client 00:06:00.872 ************************************ 00:06:00.872 19:10:19 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.872 * Looking for test storage... 00:06:00.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:00.872 19:10:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:00.872 OK 00:06:00.872 19:10:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.872 00:06:00.872 real 0m0.159s 00:06:00.872 user 0m0.068s 00:06:00.872 sys 0m0.097s 00:06:00.872 19:10:19 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.872 19:10:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:00.872 ************************************ 00:06:00.872 END TEST rpc_client 00:06:00.872 ************************************ 00:06:00.872 19:10:19 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.872 19:10:19 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.872 19:10:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.872 19:10:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.872 19:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:00.872 ************************************ 00:06:00.872 START TEST json_config 00:06:00.872 ************************************ 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.872 
19:10:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.872 19:10:19 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.872 19:10:19 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.872 19:10:19 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.872 19:10:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.872 19:10:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.872 19:10:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.872 19:10:19 json_config -- paths/export.sh@5 -- # export PATH 00:06:00.872 19:10:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@47 -- # : 0 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.872 19:10:19 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:00.872 19:10:19 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:00.872 INFO: JSON configuration test init 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.872 19:10:19 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:00.872 19:10:19 json_config -- json_config/common.sh@9 -- # local app=target 00:06:00.872 19:10:19 json_config -- json_config/common.sh@10 -- # shift 00:06:00.872 19:10:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.872 19:10:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.872 19:10:19 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.872 19:10:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.872 19:10:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.872 19:10:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2667470 00:06:00.872 19:10:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.872 Waiting for target to run... 00:06:00.872 19:10:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2667470 /var/tmp/spdk_tgt.sock 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@829 -- # '[' -z 2667470 ']' 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.872 19:10:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.872 19:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.134 [2024-07-22 19:10:19.878997] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:01.134 [2024-07-22 19:10:19.879130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667470 ] 00:06:01.134 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.706 [2024-07-22 19:10:20.372303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.706 [2024-07-22 19:10:20.553493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.706 19:10:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.706 19:10:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:01.706 19:10:20 json_config -- json_config/common.sh@26 -- # echo '' 00:06:01.706 00:06:01.706 19:10:20 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:01.706 19:10:20 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:01.706 19:10:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.706 19:10:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.706 19:10:20 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:01.706 19:10:20 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:01.706 19:10:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.706 19:10:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.967 19:10:20 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:01.967 19:10:20 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:01.967 19:10:20 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:02.908 19:10:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.908 19:10:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:02.908 19:10:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@51 -- # sort 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:02.908 19:10:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.908 19:10:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:02.908 19:10:21 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:02.908 19:10:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.908 19:10:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.168 19:10:21 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:03.168 19:10:21 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:03.168 19:10:21 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:03.168 19:10:21 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.168 19:10:21 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.168 MallocForNvmf0 00:06:03.168 19:10:22 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.168 19:10:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.430 MallocForNvmf1 00:06:03.430 19:10:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.430 19:10:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.430 [2024-07-22 19:10:22.341480] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.430 19:10:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.430 19:10:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.689 19:10:22 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:03.689 19:10:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:03.950 19:10:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.950 19:10:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.950 19:10:22 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.950 19:10:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.211 [2024-07-22 19:10:22.987630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:04.211 19:10:23 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:04.211 19:10:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.211 19:10:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.211 19:10:23 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:04.211 19:10:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.211 19:10:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.211 19:10:23 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:04.211 19:10:23 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.211 19:10:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.471 MallocBdevForConfigChangeCheck 00:06:04.471 19:10:23 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:04.471 19:10:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.471 19:10:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.471 19:10:23 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:04.471 19:10:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.731 19:10:23 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:04.731 INFO: shutting down applications... 00:06:04.731 19:10:23 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:04.731 19:10:23 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:04.731 19:10:23 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:04.731 19:10:23 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:04.991 Calling clear_iscsi_subsystem 00:06:04.991 Calling clear_nvmf_subsystem 00:06:04.991 Calling clear_nbd_subsystem 00:06:04.991 Calling clear_ublk_subsystem 00:06:04.991 Calling clear_vhost_blk_subsystem 00:06:04.991 Calling clear_vhost_scsi_subsystem 00:06:04.991 Calling clear_bdev_subsystem 00:06:05.252 19:10:23 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:05.252 19:10:23 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:05.252 19:10:23 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:05.252 19:10:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.252 19:10:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:05.252 19:10:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:05.513 19:10:24 json_config -- json_config/json_config.sh@349 -- # break 00:06:05.513 19:10:24 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:05.513 19:10:24 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:05.513 19:10:24 json_config -- json_config/common.sh@31 -- # local app=target 00:06:05.513 19:10:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:05.513 19:10:24 json_config -- json_config/common.sh@35 -- # [[ -n 2667470 ]] 00:06:05.513 19:10:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2667470 00:06:05.513 19:10:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:05.513 19:10:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.513 19:10:24 json_config -- json_config/common.sh@41 -- # kill -0 2667470 00:06:05.513 19:10:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.082 19:10:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.082 19:10:24 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.082 19:10:24 json_config -- json_config/common.sh@41 -- # kill -0 2667470 00:06:06.083 19:10:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.377 19:10:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.377 19:10:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.377 19:10:25 json_config -- json_config/common.sh@41 -- # kill -0 2667470 00:06:06.377 19:10:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.377 19:10:25 json_config -- json_config/common.sh@43 -- # break 00:06:06.377 19:10:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.377 19:10:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.377 SPDK target shutdown done 00:06:06.377 19:10:25 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:06.377 INFO: relaunching applications... 00:06:06.377 19:10:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.377 19:10:25 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.377 19:10:25 json_config -- json_config/common.sh@10 -- # shift 00:06:06.377 19:10:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.377 19:10:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.377 19:10:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.377 19:10:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.377 19:10:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.377 19:10:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2668789 00:06:06.377 19:10:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.377 Waiting for target to run... 00:06:06.377 19:10:25 json_config -- json_config/common.sh@25 -- # waitforlisten 2668789 /var/tmp/spdk_tgt.sock 00:06:06.377 19:10:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.377 19:10:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 2668789 ']' 00:06:06.377 19:10:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.377 19:10:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.377 19:10:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.377 19:10:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.377 19:10:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.637 [2024-07-22 19:10:25.356838] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:06.637 [2024-07-22 19:10:25.356968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668789 ] 00:06:06.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.898 [2024-07-22 19:10:25.745776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.158 [2024-07-22 19:10:25.923162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.098 [2024-07-22 19:10:26.856934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.098 [2024-07-22 19:10:26.889350] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:08.098 19:10:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.098 19:10:26 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:08.098 19:10:26 json_config -- json_config/common.sh@26 -- # echo '' 00:06:08.098 00:06:08.098 19:10:26 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:08.098 19:10:26 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:08.098 INFO: Checking if target configuration is the same... 00:06:08.098 19:10:26 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.098 19:10:26 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:08.098 19:10:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.098 + '[' 2 -ne 2 ']' 00:06:08.098 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:08.098 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:08.098 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:08.098 +++ basename /dev/fd/62 00:06:08.098 ++ mktemp /tmp/62.XXX 00:06:08.098 + tmp_file_1=/tmp/62.Jn7 00:06:08.098 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.098 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:08.098 + tmp_file_2=/tmp/spdk_tgt_config.json.eDE 00:06:08.098 + ret=0 00:06:08.098 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.359 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.359 + diff -u /tmp/62.Jn7 /tmp/spdk_tgt_config.json.eDE 00:06:08.359 + echo 'INFO: JSON config files are the same' 00:06:08.359 INFO: JSON config files are the same 00:06:08.359 + rm /tmp/62.Jn7 /tmp/spdk_tgt_config.json.eDE 00:06:08.359 + exit 0 00:06:08.359 19:10:27 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:08.359 19:10:27 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:08.359 INFO: changing configuration and checking if this can be detected... 
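Both the comparison that just passed and the change-detection pass announced above go through the same json_diff.sh mechanics visible in the trace: dump the live configuration with save_config, normalize both sides with config_filter.py -method sort, and compare with diff -u. A hedged sketch of the equivalent manual steps follows; the temporary file names are illustrative, and config_filter.py is assumed to filter stdin to stdout, which is what the bare invocations above suggest once shell redirections (not shown by xtrace) are taken into account.

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # live configuration from the running target, normalized for comparison
  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $rootdir/test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
  # on-disk configuration the target was relaunched with, normalized the same way
  $rootdir/test/json_config/config_filter.py -method sort \
      < $rootdir/spdk_tgt_config.json > /tmp/disk.sorted.json
  diff -u /tmp/live.sorted.json /tmp/disk.sorted.json && echo 'INFO: JSON config files are the same'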
00:06:08.359 19:10:27 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:08.359 19:10:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:08.620 19:10:27 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:08.620 19:10:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.620 19:10:27 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.620 + '[' 2 -ne 2 ']' 00:06:08.620 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:08.620 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:08.620 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:08.620 +++ basename /dev/fd/62 00:06:08.620 ++ mktemp /tmp/62.XXX 00:06:08.620 + tmp_file_1=/tmp/62.orv 00:06:08.620 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.620 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:08.620 + tmp_file_2=/tmp/spdk_tgt_config.json.t9C 00:06:08.620 + ret=0 00:06:08.620 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.881 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.881 + diff -u /tmp/62.orv /tmp/spdk_tgt_config.json.t9C 00:06:08.882 + ret=1 00:06:08.882 + echo '=== Start of file: /tmp/62.orv ===' 00:06:08.882 + cat /tmp/62.orv 00:06:08.882 + echo '=== End of file: /tmp/62.orv ===' 00:06:08.882 + echo '' 00:06:08.882 + echo '=== Start of file: /tmp/spdk_tgt_config.json.t9C ===' 00:06:08.882 + cat /tmp/spdk_tgt_config.json.t9C 00:06:08.882 + echo '=== End of file: /tmp/spdk_tgt_config.json.t9C ===' 00:06:08.882 + echo '' 00:06:08.882 + rm /tmp/62.orv /tmp/spdk_tgt_config.json.t9C 00:06:08.882 + exit 1 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:08.882 INFO: configuration change detected. 
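The change just detected is driven by a sentinel bdev: MallocBdevForConfigChangeCheck was created earlier with bdev_malloc_create 8 512 and is deleted here over the same RPC socket, so the live configuration no longer matches spdk_tgt_config.json and the second json_diff.sh pass returns 1. A minimal sketch of that step is shown below, reusing $rootdir and the normalized file names from the previous sketch (both are assumptions carried over, not names used by the test itself); the sentinel makes the detected difference deterministic instead of depending on incidental config drift.

  # removing the sentinel guarantees the live and on-disk configs now differ
  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $rootdir/test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
  diff -u /tmp/live.sorted.json /tmp/disk.sorted.json || echo 'INFO: configuration change detected.'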
00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:08.882 19:10:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.882 19:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@321 -- # [[ -n 2668789 ]] 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:08.882 19:10:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.882 19:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:08.882 19:10:27 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:09.143 19:10:27 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:09.143 19:10:27 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:09.143 19:10:27 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:09.143 19:10:27 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.143 19:10:27 json_config -- json_config/json_config.sh@327 -- # killprocess 2668789 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@948 -- # '[' -z 2668789 ']' 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@952 -- # kill -0 2668789 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@953 -- # uname 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2668789 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2668789' 00:06:09.143 killing process with pid 2668789 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@967 -- # kill 2668789 00:06:09.143 19:10:27 json_config -- common/autotest_common.sh@972 -- # wait 2668789 00:06:10.086 19:10:28 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.086 19:10:28 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:10.086 19:10:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.086 19:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.086 19:10:28 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:10.086 19:10:28 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:10.086 INFO: Success 00:06:10.086 00:06:10.086 real 0m9.140s 
00:06:10.086 user 0m10.149s 00:06:10.086 sys 0m2.208s 00:06:10.086 19:10:28 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.086 19:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.086 ************************************ 00:06:10.086 END TEST json_config 00:06:10.086 ************************************ 00:06:10.086 19:10:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.086 19:10:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.087 19:10:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.087 19:10:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.087 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:10.087 ************************************ 00:06:10.087 START TEST json_config_extra_key 00:06:10.087 ************************************ 00:06:10.087 19:10:28 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.087 19:10:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.087 19:10:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.087 19:10:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.087 19:10:28 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.087 19:10:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.087 19:10:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.087 19:10:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.087 19:10:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.087 19:10:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.087 19:10:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.087 INFO: launching applications... 00:06:10.087 19:10:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2669721 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.087 Waiting for target to run... 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2669721 /var/tmp/spdk_tgt.sock 00:06:10.087 19:10:28 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2669721 ']' 00:06:10.087 19:10:28 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.087 19:10:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.087 19:10:28 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.087 19:10:28 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.087 19:10:28 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.087 19:10:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.348 [2024-07-22 19:10:29.060177] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:10.348 [2024-07-22 19:10:29.060299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669721 ] 00:06:10.349 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.609 [2024-07-22 19:10:29.386994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.870 [2024-07-22 19:10:29.564196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.131 19:10:30 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.131 19:10:30 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.131 00:06:11.131 19:10:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.131 INFO: shutting down applications... 00:06:11.131 19:10:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2669721 ]] 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2669721 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2669721 00:06:11.131 19:10:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.703 19:10:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.703 19:10:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.703 19:10:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2669721 00:06:11.703 19:10:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.273 19:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.273 19:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.273 19:10:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2669721 00:06:12.273 19:10:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.844 19:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.844 19:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.844 19:10:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2669721 00:06:12.844 19:10:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.416 19:10:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.416 19:10:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.416 19:10:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2669721 00:06:13.416 19:10:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.416 19:10:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.416 19:10:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.416 19:10:32 
json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.416 SPDK target shutdown done 00:06:13.416 19:10:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.416 Success 00:06:13.416 00:06:13.416 real 0m3.203s 00:06:13.416 user 0m2.880s 00:06:13.416 sys 0m0.516s 00:06:13.416 19:10:32 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.416 19:10:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.416 ************************************ 00:06:13.416 END TEST json_config_extra_key 00:06:13.416 ************************************ 00:06:13.416 19:10:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.416 19:10:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.416 19:10:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.416 19:10:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.416 19:10:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.416 ************************************ 00:06:13.416 START TEST alias_rpc 00:06:13.416 ************************************ 00:06:13.416 19:10:32 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.416 * Looking for test storage... 00:06:13.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:13.416 19:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.416 19:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2670441 00:06:13.416 19:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2670441 00:06:13.416 19:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.416 19:10:32 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2670441 ']' 00:06:13.416 19:10:32 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.416 19:10:32 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.416 19:10:32 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.416 19:10:32 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.416 19:10:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.416 [2024-07-22 19:10:32.339667] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
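The json_config_extra_key run above stops the target by sending SIGINT and then polling the PID with kill -0, half a second at a time, for up to 30 iterations. A minimal standalone sketch of that pattern in bash, using an illustrative $app_pid variable rather than the app_pid array kept by json_config/common.sh:

  # send SIGINT, then wait up to ~15 s for the target to exit
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      # kill -0 only tests whether the process still exists
      if ! kill -0 "$app_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done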
00:06:13.416 [2024-07-22 19:10:32.339789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670441 ] 00:06:13.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.677 [2024-07-22 19:10:32.461947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.937 [2024-07-22 19:10:32.640478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.511 19:10:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:14.511 19:10:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2670441 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2670441 ']' 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2670441 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2670441 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2670441' 00:06:14.511 killing process with pid 2670441 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@967 -- # kill 2670441 00:06:14.511 19:10:33 alias_rpc -- common/autotest_common.sh@972 -- # wait 2670441 00:06:16.425 00:06:16.425 real 0m2.946s 00:06:16.425 user 0m2.971s 00:06:16.425 sys 0m0.489s 00:06:16.425 19:10:35 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.425 19:10:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.425 ************************************ 00:06:16.425 END TEST alias_rpc 00:06:16.425 ************************************ 00:06:16.425 19:10:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.425 19:10:35 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:16.425 19:10:35 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.425 19:10:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.425 19:10:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.425 19:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:16.425 ************************************ 00:06:16.425 START TEST spdkcli_tcp 00:06:16.425 ************************************ 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.425 * Looking for test storage... 
00:06:16.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2671006 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2671006 00:06:16.425 19:10:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2671006 ']' 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.425 19:10:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.425 [2024-07-22 19:10:35.370138] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
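Every test in this section blocks on waitforlisten until spdk_tgt answers on its UNIX-domain RPC socket before issuing any RPCs. As a rough illustration only, not the actual autotest_common.sh implementation, the same effect can be had by retrying a harmless RPC until it succeeds:

  # poll the RPC socket until the target responds (illustrative retry budget)
  rpc=./scripts/rpc.py      # path assumed relative to the spdk repo root
  sock=/var/tmp/spdk.sock
  for (( i = 0; i < 100; i++ )); do
      if "$rpc" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
          echo "target is listening on $sock"
          break
      fi
      sleep 0.1
  done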
00:06:16.425 [2024-07-22 19:10:35.370283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671006 ] 00:06:16.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.686 [2024-07-22 19:10:35.495406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.946 [2024-07-22 19:10:35.677547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.946 [2024-07-22 19:10:35.677613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.516 19:10:36 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.516 19:10:36 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:17.516 19:10:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2671184 00:06:17.516 19:10:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.516 19:10:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.516 [ 00:06:17.516 "bdev_malloc_delete", 00:06:17.516 "bdev_malloc_create", 00:06:17.516 "bdev_null_resize", 00:06:17.516 "bdev_null_delete", 00:06:17.516 "bdev_null_create", 00:06:17.516 "bdev_nvme_cuse_unregister", 00:06:17.516 "bdev_nvme_cuse_register", 00:06:17.516 "bdev_opal_new_user", 00:06:17.516 "bdev_opal_set_lock_state", 00:06:17.516 "bdev_opal_delete", 00:06:17.516 "bdev_opal_get_info", 00:06:17.516 "bdev_opal_create", 00:06:17.516 "bdev_nvme_opal_revert", 00:06:17.516 "bdev_nvme_opal_init", 00:06:17.516 "bdev_nvme_send_cmd", 00:06:17.516 "bdev_nvme_get_path_iostat", 00:06:17.516 "bdev_nvme_get_mdns_discovery_info", 00:06:17.516 "bdev_nvme_stop_mdns_discovery", 00:06:17.517 "bdev_nvme_start_mdns_discovery", 00:06:17.517 "bdev_nvme_set_multipath_policy", 00:06:17.517 "bdev_nvme_set_preferred_path", 00:06:17.517 "bdev_nvme_get_io_paths", 00:06:17.517 "bdev_nvme_remove_error_injection", 00:06:17.517 "bdev_nvme_add_error_injection", 00:06:17.517 "bdev_nvme_get_discovery_info", 00:06:17.517 "bdev_nvme_stop_discovery", 00:06:17.517 "bdev_nvme_start_discovery", 00:06:17.517 "bdev_nvme_get_controller_health_info", 00:06:17.517 "bdev_nvme_disable_controller", 00:06:17.517 "bdev_nvme_enable_controller", 00:06:17.517 "bdev_nvme_reset_controller", 00:06:17.517 "bdev_nvme_get_transport_statistics", 00:06:17.517 "bdev_nvme_apply_firmware", 00:06:17.517 "bdev_nvme_detach_controller", 00:06:17.517 "bdev_nvme_get_controllers", 00:06:17.517 "bdev_nvme_attach_controller", 00:06:17.517 "bdev_nvme_set_hotplug", 00:06:17.517 "bdev_nvme_set_options", 00:06:17.517 "bdev_passthru_delete", 00:06:17.517 "bdev_passthru_create", 00:06:17.517 "bdev_lvol_set_parent_bdev", 00:06:17.517 "bdev_lvol_set_parent", 00:06:17.517 "bdev_lvol_check_shallow_copy", 00:06:17.517 "bdev_lvol_start_shallow_copy", 00:06:17.517 "bdev_lvol_grow_lvstore", 00:06:17.517 "bdev_lvol_get_lvols", 00:06:17.517 "bdev_lvol_get_lvstores", 00:06:17.517 "bdev_lvol_delete", 00:06:17.517 "bdev_lvol_set_read_only", 00:06:17.517 "bdev_lvol_resize", 00:06:17.517 "bdev_lvol_decouple_parent", 00:06:17.517 "bdev_lvol_inflate", 00:06:17.517 "bdev_lvol_rename", 00:06:17.517 "bdev_lvol_clone_bdev", 00:06:17.517 "bdev_lvol_clone", 00:06:17.517 "bdev_lvol_snapshot", 00:06:17.517 "bdev_lvol_create", 00:06:17.517 "bdev_lvol_delete_lvstore", 00:06:17.517 
"bdev_lvol_rename_lvstore", 00:06:17.517 "bdev_lvol_create_lvstore", 00:06:17.517 "bdev_raid_set_options", 00:06:17.517 "bdev_raid_remove_base_bdev", 00:06:17.517 "bdev_raid_add_base_bdev", 00:06:17.517 "bdev_raid_delete", 00:06:17.517 "bdev_raid_create", 00:06:17.517 "bdev_raid_get_bdevs", 00:06:17.517 "bdev_error_inject_error", 00:06:17.517 "bdev_error_delete", 00:06:17.517 "bdev_error_create", 00:06:17.517 "bdev_split_delete", 00:06:17.517 "bdev_split_create", 00:06:17.517 "bdev_delay_delete", 00:06:17.517 "bdev_delay_create", 00:06:17.517 "bdev_delay_update_latency", 00:06:17.517 "bdev_zone_block_delete", 00:06:17.517 "bdev_zone_block_create", 00:06:17.517 "blobfs_create", 00:06:17.517 "blobfs_detect", 00:06:17.517 "blobfs_set_cache_size", 00:06:17.517 "bdev_aio_delete", 00:06:17.517 "bdev_aio_rescan", 00:06:17.517 "bdev_aio_create", 00:06:17.517 "bdev_ftl_set_property", 00:06:17.517 "bdev_ftl_get_properties", 00:06:17.517 "bdev_ftl_get_stats", 00:06:17.517 "bdev_ftl_unmap", 00:06:17.517 "bdev_ftl_unload", 00:06:17.517 "bdev_ftl_delete", 00:06:17.517 "bdev_ftl_load", 00:06:17.517 "bdev_ftl_create", 00:06:17.517 "bdev_virtio_attach_controller", 00:06:17.517 "bdev_virtio_scsi_get_devices", 00:06:17.517 "bdev_virtio_detach_controller", 00:06:17.517 "bdev_virtio_blk_set_hotplug", 00:06:17.517 "bdev_iscsi_delete", 00:06:17.517 "bdev_iscsi_create", 00:06:17.517 "bdev_iscsi_set_options", 00:06:17.517 "accel_error_inject_error", 00:06:17.517 "ioat_scan_accel_module", 00:06:17.517 "dsa_scan_accel_module", 00:06:17.517 "iaa_scan_accel_module", 00:06:17.517 "keyring_file_remove_key", 00:06:17.517 "keyring_file_add_key", 00:06:17.517 "keyring_linux_set_options", 00:06:17.517 "iscsi_get_histogram", 00:06:17.517 "iscsi_enable_histogram", 00:06:17.517 "iscsi_set_options", 00:06:17.517 "iscsi_get_auth_groups", 00:06:17.517 "iscsi_auth_group_remove_secret", 00:06:17.517 "iscsi_auth_group_add_secret", 00:06:17.517 "iscsi_delete_auth_group", 00:06:17.517 "iscsi_create_auth_group", 00:06:17.517 "iscsi_set_discovery_auth", 00:06:17.517 "iscsi_get_options", 00:06:17.517 "iscsi_target_node_request_logout", 00:06:17.517 "iscsi_target_node_set_redirect", 00:06:17.517 "iscsi_target_node_set_auth", 00:06:17.517 "iscsi_target_node_add_lun", 00:06:17.517 "iscsi_get_stats", 00:06:17.517 "iscsi_get_connections", 00:06:17.517 "iscsi_portal_group_set_auth", 00:06:17.517 "iscsi_start_portal_group", 00:06:17.517 "iscsi_delete_portal_group", 00:06:17.517 "iscsi_create_portal_group", 00:06:17.517 "iscsi_get_portal_groups", 00:06:17.517 "iscsi_delete_target_node", 00:06:17.517 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.517 "iscsi_target_node_add_pg_ig_maps", 00:06:17.517 "iscsi_create_target_node", 00:06:17.517 "iscsi_get_target_nodes", 00:06:17.517 "iscsi_delete_initiator_group", 00:06:17.517 "iscsi_initiator_group_remove_initiators", 00:06:17.517 "iscsi_initiator_group_add_initiators", 00:06:17.517 "iscsi_create_initiator_group", 00:06:17.517 "iscsi_get_initiator_groups", 00:06:17.517 "nvmf_set_crdt", 00:06:17.517 "nvmf_set_config", 00:06:17.517 "nvmf_set_max_subsystems", 00:06:17.517 "nvmf_stop_mdns_prr", 00:06:17.517 "nvmf_publish_mdns_prr", 00:06:17.517 "nvmf_subsystem_get_listeners", 00:06:17.517 "nvmf_subsystem_get_qpairs", 00:06:17.517 "nvmf_subsystem_get_controllers", 00:06:17.517 "nvmf_get_stats", 00:06:17.517 "nvmf_get_transports", 00:06:17.517 "nvmf_create_transport", 00:06:17.517 "nvmf_get_targets", 00:06:17.517 "nvmf_delete_target", 00:06:17.517 "nvmf_create_target", 00:06:17.517 
"nvmf_subsystem_allow_any_host", 00:06:17.517 "nvmf_subsystem_remove_host", 00:06:17.517 "nvmf_subsystem_add_host", 00:06:17.517 "nvmf_ns_remove_host", 00:06:17.517 "nvmf_ns_add_host", 00:06:17.517 "nvmf_subsystem_remove_ns", 00:06:17.517 "nvmf_subsystem_add_ns", 00:06:17.517 "nvmf_subsystem_listener_set_ana_state", 00:06:17.517 "nvmf_discovery_get_referrals", 00:06:17.517 "nvmf_discovery_remove_referral", 00:06:17.517 "nvmf_discovery_add_referral", 00:06:17.517 "nvmf_subsystem_remove_listener", 00:06:17.517 "nvmf_subsystem_add_listener", 00:06:17.517 "nvmf_delete_subsystem", 00:06:17.517 "nvmf_create_subsystem", 00:06:17.517 "nvmf_get_subsystems", 00:06:17.517 "env_dpdk_get_mem_stats", 00:06:17.517 "nbd_get_disks", 00:06:17.517 "nbd_stop_disk", 00:06:17.517 "nbd_start_disk", 00:06:17.517 "ublk_recover_disk", 00:06:17.517 "ublk_get_disks", 00:06:17.517 "ublk_stop_disk", 00:06:17.517 "ublk_start_disk", 00:06:17.517 "ublk_destroy_target", 00:06:17.517 "ublk_create_target", 00:06:17.517 "virtio_blk_create_transport", 00:06:17.517 "virtio_blk_get_transports", 00:06:17.517 "vhost_controller_set_coalescing", 00:06:17.517 "vhost_get_controllers", 00:06:17.517 "vhost_delete_controller", 00:06:17.517 "vhost_create_blk_controller", 00:06:17.517 "vhost_scsi_controller_remove_target", 00:06:17.517 "vhost_scsi_controller_add_target", 00:06:17.517 "vhost_start_scsi_controller", 00:06:17.517 "vhost_create_scsi_controller", 00:06:17.517 "thread_set_cpumask", 00:06:17.517 "framework_get_governor", 00:06:17.517 "framework_get_scheduler", 00:06:17.517 "framework_set_scheduler", 00:06:17.517 "framework_get_reactors", 00:06:17.517 "thread_get_io_channels", 00:06:17.517 "thread_get_pollers", 00:06:17.517 "thread_get_stats", 00:06:17.517 "framework_monitor_context_switch", 00:06:17.517 "spdk_kill_instance", 00:06:17.517 "log_enable_timestamps", 00:06:17.517 "log_get_flags", 00:06:17.517 "log_clear_flag", 00:06:17.517 "log_set_flag", 00:06:17.517 "log_get_level", 00:06:17.517 "log_set_level", 00:06:17.517 "log_get_print_level", 00:06:17.517 "log_set_print_level", 00:06:17.517 "framework_enable_cpumask_locks", 00:06:17.517 "framework_disable_cpumask_locks", 00:06:17.517 "framework_wait_init", 00:06:17.517 "framework_start_init", 00:06:17.517 "scsi_get_devices", 00:06:17.517 "bdev_get_histogram", 00:06:17.517 "bdev_enable_histogram", 00:06:17.517 "bdev_set_qos_limit", 00:06:17.517 "bdev_set_qd_sampling_period", 00:06:17.517 "bdev_get_bdevs", 00:06:17.517 "bdev_reset_iostat", 00:06:17.517 "bdev_get_iostat", 00:06:17.517 "bdev_examine", 00:06:17.517 "bdev_wait_for_examine", 00:06:17.517 "bdev_set_options", 00:06:17.517 "notify_get_notifications", 00:06:17.517 "notify_get_types", 00:06:17.517 "accel_get_stats", 00:06:17.517 "accel_set_options", 00:06:17.517 "accel_set_driver", 00:06:17.517 "accel_crypto_key_destroy", 00:06:17.517 "accel_crypto_keys_get", 00:06:17.517 "accel_crypto_key_create", 00:06:17.517 "accel_assign_opc", 00:06:17.517 "accel_get_module_info", 00:06:17.517 "accel_get_opc_assignments", 00:06:17.517 "vmd_rescan", 00:06:17.517 "vmd_remove_device", 00:06:17.517 "vmd_enable", 00:06:17.517 "sock_get_default_impl", 00:06:17.517 "sock_set_default_impl", 00:06:17.517 "sock_impl_set_options", 00:06:17.517 "sock_impl_get_options", 00:06:17.517 "iobuf_get_stats", 00:06:17.517 "iobuf_set_options", 00:06:17.517 "framework_get_pci_devices", 00:06:17.517 "framework_get_config", 00:06:17.517 "framework_get_subsystems", 00:06:17.517 "trace_get_info", 00:06:17.517 "trace_get_tpoint_group_mask", 00:06:17.517 
"trace_disable_tpoint_group", 00:06:17.517 "trace_enable_tpoint_group", 00:06:17.517 "trace_clear_tpoint_mask", 00:06:17.517 "trace_set_tpoint_mask", 00:06:17.517 "keyring_get_keys", 00:06:17.517 "spdk_get_version", 00:06:17.517 "rpc_get_methods" 00:06:17.517 ] 00:06:17.517 19:10:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.517 19:10:36 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.517 19:10:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.518 19:10:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.518 19:10:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2671006 00:06:17.518 19:10:36 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2671006 ']' 00:06:17.518 19:10:36 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2671006 00:06:17.518 19:10:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:17.518 19:10:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.518 19:10:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671006 00:06:17.778 19:10:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.778 19:10:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.778 19:10:36 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671006' 00:06:17.778 killing process with pid 2671006 00:06:17.778 19:10:36 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2671006 00:06:17.778 19:10:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2671006 00:06:19.693 00:06:19.693 real 0m2.961s 00:06:19.693 user 0m5.104s 00:06:19.693 sys 0m0.537s 00:06:19.693 19:10:38 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.693 19:10:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.693 ************************************ 00:06:19.693 END TEST spdkcli_tcp 00:06:19.693 ************************************ 00:06:19.693 19:10:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.693 19:10:38 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.693 19:10:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.693 19:10:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.693 19:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.693 ************************************ 00:06:19.693 START TEST dpdk_mem_utility 00:06:19.693 ************************************ 00:06:19.693 19:10:38 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.693 * Looking for test storage... 
00:06:19.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:19.693 19:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.693 19:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2671593 00:06:19.693 19:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2671593 00:06:19.693 19:10:38 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2671593 ']' 00:06:19.694 19:10:38 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.694 19:10:38 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.694 19:10:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.694 19:10:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.694 19:10:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.694 19:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.694 [2024-07-22 19:10:38.356564] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:19.694 [2024-07-22 19:10:38.356707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671593 ] 00:06:19.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.694 [2024-07-22 19:10:38.481914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.955 [2024-07-22 19:10:38.659896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.528 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.528 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:20.528 19:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:20.528 19:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:20.528 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.528 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 { 00:06:20.528 "filename": "/tmp/spdk_mem_dump.txt" 00:06:20.528 } 00:06:20.528 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.528 19:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:20.528 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:20.528 1 heaps totaling size 820.000000 MiB 00:06:20.528 size: 820.000000 MiB heap id: 0 00:06:20.528 end heaps---------- 00:06:20.528 8 mempools totaling size 598.116089 MiB 00:06:20.528 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:20.528 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:20.528 size: 84.521057 MiB name: bdev_io_2671593 00:06:20.528 size: 51.011292 MiB name: evtpool_2671593 00:06:20.528 
size: 50.003479 MiB name: msgpool_2671593 00:06:20.528 size: 21.763794 MiB name: PDU_Pool 00:06:20.528 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:20.528 size: 0.026123 MiB name: Session_Pool 00:06:20.528 end mempools------- 00:06:20.528 6 memzones totaling size 4.142822 MiB 00:06:20.528 size: 1.000366 MiB name: RG_ring_0_2671593 00:06:20.528 size: 1.000366 MiB name: RG_ring_1_2671593 00:06:20.528 size: 1.000366 MiB name: RG_ring_4_2671593 00:06:20.528 size: 1.000366 MiB name: RG_ring_5_2671593 00:06:20.528 size: 0.125366 MiB name: RG_ring_2_2671593 00:06:20.528 size: 0.015991 MiB name: RG_ring_3_2671593 00:06:20.528 end memzones------- 00:06:20.528 19:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:20.528 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:20.528 list of free elements. size: 18.514832 MiB 00:06:20.528 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:20.528 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:20.528 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:20.528 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:20.528 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:20.528 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:20.528 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:20.528 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:20.528 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:20.528 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:20.528 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:20.528 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:20.528 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:20.528 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:20.528 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:20.528 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:20.528 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:20.528 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:20.528 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:20.528 list of standard malloc elements. 
size: 199.220764 MiB 00:06:20.528 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:20.528 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:20.528 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:20.528 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:20.528 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:20.528 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:20.528 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:20.528 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:20.528 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:20.528 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:20.528 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:20.528 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:20.528 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:20.528 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:20.529 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:20.529 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:20.529 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:20.529 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:20.529 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:20.529 list of memzone associated elements. 
size: 602.264404 MiB 00:06:20.529 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:20.529 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:20.529 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:20.529 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:20.529 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:20.529 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2671593_0 00:06:20.529 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:20.529 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2671593_0 00:06:20.529 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:20.529 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2671593_0 00:06:20.529 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:20.529 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:20.529 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:20.529 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:20.529 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:20.529 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2671593 00:06:20.529 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:20.529 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2671593 00:06:20.529 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:20.529 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2671593 00:06:20.529 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:20.529 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:20.529 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:20.529 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:20.529 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:20.529 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:20.529 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:20.529 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:20.529 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:20.529 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2671593 00:06:20.529 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:20.529 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2671593 00:06:20.529 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:20.529 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2671593 00:06:20.529 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:20.529 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2671593 00:06:20.529 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:20.529 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2671593 00:06:20.529 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:20.529 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:20.529 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:20.529 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:20.529 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:20.529 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:20.529 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:20.529 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2671593 00:06:20.529 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:20.529 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:20.529 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:20.529 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:20.529 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:20.529 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2671593 00:06:20.529 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:20.529 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:20.529 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:20.529 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2671593 00:06:20.529 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:20.529 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2671593 00:06:20.529 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:20.529 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:20.529 19:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:20.529 19:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2671593 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2671593 ']' 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2671593 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671593 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671593' 00:06:20.529 killing process with pid 2671593 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2671593 00:06:20.529 19:10:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2671593 00:06:22.444 00:06:22.444 real 0m2.825s 00:06:22.444 user 0m2.775s 00:06:22.444 sys 0m0.484s 00:06:22.444 19:10:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.444 19:10:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.444 ************************************ 00:06:22.444 END TEST dpdk_mem_utility 00:06:22.444 ************************************ 00:06:22.444 19:10:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.444 19:10:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:22.444 19:10:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.444 19:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.444 19:10:41 -- common/autotest_common.sh@10 -- # set +x 00:06:22.444 ************************************ 00:06:22.444 START TEST event 00:06:22.444 ************************************ 00:06:22.444 19:10:41 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:22.444 * Looking for test storage... 
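The dpdk_mem_utility listing above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the running target dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump. Against a live spdk_tgt the same flow is roughly:

  # ask the target to write its memory dump; the RPC returns the dump filename
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py

  # per-element detail for heap 0, as in the free/malloc element lists above
  ./scripts/dpdk_mem_info.py -m 0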
00:06:22.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:22.444 19:10:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:22.444 19:10:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.444 19:10:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.444 19:10:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:22.445 19:10:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.445 19:10:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.445 ************************************ 00:06:22.445 START TEST event_perf 00:06:22.445 ************************************ 00:06:22.445 19:10:41 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.445 Running I/O for 1 seconds...[2024-07-22 19:10:41.264813] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:22.445 [2024-07-22 19:10:41.264926] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672316 ] 00:06:22.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.445 [2024-07-22 19:10:41.382169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.705 [2024-07-22 19:10:41.560703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.705 [2024-07-22 19:10:41.560787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.705 [2024-07-22 19:10:41.560901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.705 Running I/O for 1 seconds...[2024-07-22 19:10:41.560927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.093 00:06:24.093 lcore 0: 185714 00:06:24.093 lcore 1: 185714 00:06:24.093 lcore 2: 185713 00:06:24.093 lcore 3: 185716 00:06:24.093 done. 00:06:24.093 00:06:24.093 real 0m1.630s 00:06:24.093 user 0m4.470s 00:06:24.093 sys 0m0.156s 00:06:24.093 19:10:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.093 19:10:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.093 ************************************ 00:06:24.093 END TEST event_perf 00:06:24.093 ************************************ 00:06:24.093 19:10:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.093 19:10:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:24.093 19:10:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.093 19:10:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.093 19:10:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.093 ************************************ 00:06:24.093 START TEST event_reactor 00:06:24.093 ************************************ 00:06:24.093 19:10:42 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:24.093 [2024-07-22 19:10:42.968211] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:24.093 [2024-07-22 19:10:42.968311] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672676 ] 00:06:24.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.353 [2024-07-22 19:10:43.078999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.353 [2024-07-22 19:10:43.251044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.743 test_start 00:06:25.743 oneshot 00:06:25.743 tick 100 00:06:25.743 tick 100 00:06:25.743 tick 250 00:06:25.743 tick 100 00:06:25.743 tick 100 00:06:25.743 tick 100 00:06:25.743 tick 250 00:06:25.743 tick 500 00:06:25.743 tick 100 00:06:25.743 tick 100 00:06:25.743 tick 250 00:06:25.743 tick 100 00:06:25.743 tick 100 00:06:25.743 test_end 00:06:25.743 00:06:25.743 real 0m1.608s 00:06:25.743 user 0m1.479s 00:06:25.743 sys 0m0.123s 00:06:25.743 19:10:44 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.743 19:10:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:25.743 ************************************ 00:06:25.743 END TEST event_reactor 00:06:25.743 ************************************ 00:06:25.743 19:10:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.743 19:10:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.743 19:10:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:25.743 19:10:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.743 19:10:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.743 ************************************ 00:06:25.743 START TEST event_reactor_perf 00:06:25.743 ************************************ 00:06:25.743 19:10:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.743 [2024-07-22 19:10:44.655556] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
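The event_perf, event_reactor and event_reactor_perf results above come from plain test binaries that take a core mask (-m) and a run time in seconds (-t). Assuming the binaries are already built in place in the test tree, as in this run, they can be re-run outside the harness with the same arguments:

  # per-lcore event counts on four cores for one second
  ./test/event/event_perf/event_perf -m 0xF -t 1

  # oneshot/tick trace from a single reactor for one second
  ./test/event/reactor/reactor -t 1

  # events-per-second throughput figure for one second
  ./test/event/reactor_perf/reactor_perf -t 1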
00:06:25.743 [2024-07-22 19:10:44.655654] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673028 ] 00:06:26.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.055 [2024-07-22 19:10:44.770448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.055 [2024-07-22 19:10:44.948051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.440 test_start 00:06:27.440 test_end 00:06:27.440 Performance: 294134 events per second 00:06:27.440 00:06:27.440 real 0m1.620s 00:06:27.440 user 0m1.482s 00:06:27.440 sys 0m0.131s 00:06:27.440 19:10:46 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.440 19:10:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.440 ************************************ 00:06:27.440 END TEST event_reactor_perf 00:06:27.440 ************************************ 00:06:27.440 19:10:46 event -- common/autotest_common.sh@1142 -- # return 0 00:06:27.440 19:10:46 event -- event/event.sh@49 -- # uname -s 00:06:27.440 19:10:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.440 19:10:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.440 19:10:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.440 19:10:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.440 19:10:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.440 ************************************ 00:06:27.440 START TEST event_scheduler 00:06:27.440 ************************************ 00:06:27.440 19:10:46 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.701 * Looking for test storage... 00:06:27.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:27.701 19:10:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.701 19:10:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2673418 00:06:27.701 19:10:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.701 19:10:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.701 19:10:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2673418 00:06:27.701 19:10:46 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2673418 ']' 00:06:27.701 19:10:46 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.701 19:10:46 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.701 19:10:46 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.701 19:10:46 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.701 19:10:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.701 [2024-07-22 19:10:46.486818] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:27.701 [2024-07-22 19:10:46.486926] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673418 ] 00:06:27.701 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.701 [2024-07-22 19:10:46.586517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.962 [2024-07-22 19:10:46.722503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.962 [2024-07-22 19:10:46.722737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.962 [2024-07-22 19:10:46.722836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.962 [2024-07-22 19:10:46.722859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.533 19:10:47 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.533 19:10:47 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:28.533 19:10:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.534 [2024-07-22 19:10:47.224713] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:28.534 [2024-07-22 19:10:47.224736] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:28.534 [2024-07-22 19:10:47.224756] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:28.534 [2024-07-22 19:10:47.224764] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:28.534 [2024-07-22 19:10:47.224773] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.534 19:10:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.534 [2024-07-22 19:10:47.395142] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
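The scheduler_create_thread subtest that follows drives the scheduler app entirely through rpc_cmd, the harness wrapper around scripts/rpc.py. An equivalent direct invocation, assuming the scheduler_plugin module from test/event/scheduler is put on PYTHONPATH the way the wrapper does, looks like:

  # make the rpc.py plugin importable (assumption: mirrors what rpc_cmd sets up)
  export PYTHONPATH=$PYTHONPATH:./test/event/scheduler

  # pick the dynamic scheduler and finish framework init, as in scheduler.sh@39-40
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init

  # create an active thread pinned to core 0 (scheduler.sh@12)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m 0x1 -a 100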
00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.534 19:10:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.534 ************************************ 00:06:28.534 START TEST scheduler_create_thread 00:06:28.534 ************************************ 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.534 2 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.534 3 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.534 4 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.534 5 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.534 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.795 6 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.795 7 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.795 8 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.795 9 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.795 10 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.795 19:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.180 19:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.180 19:10:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.180 19:10:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.181 19:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.181 19:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.751 19:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.752 19:10:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:30.752 19:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.752 19:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.695 19:10:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.695 19:10:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:31.695 19:10:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:31.696 19:10:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.696 19:10:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.642 19:10:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.642 00:06:32.642 real 0m3.893s 00:06:32.642 user 0m0.027s 00:06:32.642 sys 0m0.004s 00:06:32.642 19:10:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.642 19:10:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.642 ************************************ 00:06:32.642 END TEST scheduler_create_thread 00:06:32.642 ************************************ 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:32.642 19:10:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:32.642 19:10:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2673418 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2673418 ']' 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2673418 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2673418 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2673418' 00:06:32.642 killing process with pid 2673418 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2673418 00:06:32.642 19:10:51 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2673418 00:06:32.903 [2024-07-22 19:10:51.707341] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:33.475 00:06:33.475 real 0m6.029s 00:06:33.475 user 0m12.443s 00:06:33.475 sys 0m0.419s 00:06:33.475 19:10:52 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.475 19:10:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.475 ************************************ 00:06:33.475 END TEST event_scheduler 00:06:33.475 ************************************ 00:06:33.475 19:10:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:33.475 19:10:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:33.475 19:10:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:33.475 19:10:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.475 19:10:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.475 19:10:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.475 ************************************ 00:06:33.475 START TEST app_repeat 00:06:33.475 ************************************ 00:06:33.475 19:10:52 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:33.475 19:10:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.475 19:10:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2674795 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2674795' 00:06:33.736 Process app_repeat pid: 2674795 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:33.736 spdk_app_start Round 0 00:06:33.736 19:10:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2674795 /var/tmp/spdk-nbd.sock 00:06:33.736 19:10:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2674795 ']' 00:06:33.736 19:10:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.736 19:10:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.736 19:10:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.736 19:10:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.736 19:10:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.736 [2024-07-22 19:10:52.485193] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
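The app_repeat phase that begins here launches the test app on two cores with its own NBD RPC socket, remembers the pid, and installs a cleanup trap before waiting for the socket to come up. A rough sketch of that launch pattern with the same arguments as in the trace (backgrounding with & and $! is assumed from the captured pid; the real script cleans up through its killprocess helper rather than a bare kill):

rpc_server=/var/tmp/spdk-nbd.sock
./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
# make sure the app is torn down even if a later step fails
trap 'kill -9 "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT
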
00:06:33.736 [2024-07-22 19:10:52.485312] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674795 ] 00:06:33.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.736 [2024-07-22 19:10:52.611539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.996 [2024-07-22 19:10:52.790909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.996 [2024-07-22 19:10:52.790931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.580 19:10:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.580 19:10:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:34.580 19:10:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.580 Malloc0 00:06:34.580 19:10:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.841 Malloc1 00:06:34.841 19:10:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.841 19:10:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.102 /dev/nbd0 00:06:35.102 19:10:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.102 19:10:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.102 19:10:53 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.102 1+0 records in 00:06:35.102 1+0 records out 00:06:35.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273421 s, 15.0 MB/s 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.102 19:10:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.102 19:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.102 19:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.102 19:10:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.102 /dev/nbd1 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.363 1+0 records in 00:06:35.363 1+0 records out 00:06:35.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245337 s, 16.7 MB/s 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.363 19:10:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.363 19:10:54 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.363 { 00:06:35.363 "nbd_device": "/dev/nbd0", 00:06:35.363 "bdev_name": "Malloc0" 00:06:35.363 }, 00:06:35.363 { 00:06:35.363 "nbd_device": "/dev/nbd1", 00:06:35.363 "bdev_name": "Malloc1" 00:06:35.363 } 00:06:35.363 ]' 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.363 { 00:06:35.363 "nbd_device": "/dev/nbd0", 00:06:35.363 "bdev_name": "Malloc0" 00:06:35.363 }, 00:06:35.363 { 00:06:35.363 "nbd_device": "/dev/nbd1", 00:06:35.363 "bdev_name": "Malloc1" 00:06:35.363 } 00:06:35.363 ]' 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.363 /dev/nbd1' 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.363 /dev/nbd1' 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.363 256+0 records in 00:06:35.363 256+0 records out 00:06:35.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012473 s, 84.1 MB/s 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.363 19:10:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.625 256+0 records in 00:06:35.625 256+0 records out 00:06:35.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146469 s, 71.6 MB/s 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.625 256+0 records in 00:06:35.625 256+0 records out 00:06:35.625 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0235591 s, 44.5 MB/s 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.625 19:10:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.886 19:10:54 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.886 19:10:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.147 19:10:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.147 19:10:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.408 19:10:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.350 [2024-07-22 19:10:56.150741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.610 [2024-07-22 19:10:56.317506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.611 [2024-07-22 19:10:56.317508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.611 [2024-07-22 19:10:56.455690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.611 [2024-07-22 19:10:56.455749] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.523 19:10:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.523 19:10:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:39.523 spdk_app_start Round 1 00:06:39.523 19:10:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2674795 /var/tmp/spdk-nbd.sock 00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2674795 ']' 00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
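The repeated "Waiting for process to start up and listen on UNIX domain socket ..." messages come from waitforlisten, which simply polls until the app's RPC socket answers before the round continues. A hedged sketch of that polling loop (not the exact autotest_common.sh implementation; the retry count, sleep interval, and use of rpc_get_methods as the liveness probe are illustrative):

sock=/var/tmp/spdk-nbd.sock
waitforlisten_sketch() {
    local i
    for ((i = 0; i < 100; i++)); do
        # any cheap RPC works as a probe; rpc_get_methods is always available
        ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.5
    done
    return 1
}
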
00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.523 19:10:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:39.523 19:10:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.784 Malloc0 00:06:39.784 19:10:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.045 Malloc1 00:06:40.045 19:10:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.045 /dev/nbd0 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:40.045 1+0 records in 00:06:40.045 1+0 records out 00:06:40.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214551 s, 19.1 MB/s 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.045 19:10:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.045 19:10:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.305 /dev/nbd1 00:06:40.306 19:10:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.306 19:10:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.306 1+0 records in 00:06:40.306 1+0 records out 00:06:40.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213515 s, 19.2 MB/s 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.306 19:10:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.306 19:10:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.306 19:10:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.306 19:10:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.306 19:10:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.306 19:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:40.566 { 00:06:40.566 "nbd_device": "/dev/nbd0", 00:06:40.566 "bdev_name": "Malloc0" 00:06:40.566 }, 00:06:40.566 { 00:06:40.566 "nbd_device": "/dev/nbd1", 00:06:40.566 "bdev_name": "Malloc1" 00:06:40.566 } 00:06:40.566 ]' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.566 { 00:06:40.566 "nbd_device": "/dev/nbd0", 00:06:40.566 "bdev_name": "Malloc0" 00:06:40.566 }, 00:06:40.566 { 00:06:40.566 "nbd_device": "/dev/nbd1", 00:06:40.566 "bdev_name": "Malloc1" 00:06:40.566 } 00:06:40.566 ]' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.566 /dev/nbd1' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.566 /dev/nbd1' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.566 256+0 records in 00:06:40.566 256+0 records out 00:06:40.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118572 s, 88.4 MB/s 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.566 256+0 records in 00:06:40.566 256+0 records out 00:06:40.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01628 s, 64.4 MB/s 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.566 256+0 records in 00:06:40.566 256+0 records out 00:06:40.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177973 s, 58.9 MB/s 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.566 19:10:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.567 19:10:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.567 19:10:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.567 19:10:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.567 19:10:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.567 19:10:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.567 19:10:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.567 19:10:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.827 19:10:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.087 19:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.087 19:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.087 19:10:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.087 19:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.087 19:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.088 19:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.088 19:11:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.088 19:11:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.348 19:11:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.288 [2024-07-22 19:11:01.223109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.547 [2024-07-22 19:11:01.389707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.547 [2024-07-22 19:11:01.389724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.806 [2024-07-22 19:11:01.527655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.806 [2024-07-22 19:11:01.527700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.718 19:11:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.718 19:11:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:44.718 spdk_app_start Round 2 00:06:44.719 19:11:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2674795 /var/tmp/spdk-nbd.sock 00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2674795 ']' 00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
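Every round's setup, visible in the traces above, is the same: create two malloc bdevs over RPC, export them as /dev/nbd0 and /dev/nbd1, and wait until the kernel actually lists each device before touching it. A simplified sketch of that sequence for one device, using the same RPC calls and the /proc/partitions check from waitfornbd (workspace paths abbreviated; the loop bound matches the trace, the sleep interval is illustrative):

sock=/var/tmp/spdk-nbd.sock
./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096        # 64 MB bdev, 4096-byte blocks -> Malloc0
./scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
done
# 4 KiB direct read to prove the export is usable
dd if=/dev/nbd0 of=./nbdtest bs=4096 count=1 iflag=direct
rm -f ./nbdtest
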
00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.719 19:11:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:44.719 19:11:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.719 Malloc0 00:06:44.719 19:11:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.017 Malloc1 00:06:45.017 19:11:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.017 19:11:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.278 /dev/nbd0 00:06:45.278 19:11:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.278 19:11:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:45.278 1+0 records in 00:06:45.278 1+0 records out 00:06:45.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240184 s, 17.1 MB/s 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:45.278 19:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.278 19:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.278 19:11:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.278 /dev/nbd1 00:06:45.278 19:11:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.278 19:11:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:45.278 19:11:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.279 19:11:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.279 19:11:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.279 1+0 records in 00:06:45.279 1+0 records out 00:06:45.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312556 s, 13.1 MB/s 00:06:45.279 19:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.279 19:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:45.279 19:11:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.279 19:11:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.279 19:11:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:45.279 19:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.279 19:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.279 19:11:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.279 19:11:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.279 19:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:45.540 { 00:06:45.540 "nbd_device": "/dev/nbd0", 00:06:45.540 "bdev_name": "Malloc0" 00:06:45.540 }, 00:06:45.540 { 00:06:45.540 "nbd_device": "/dev/nbd1", 00:06:45.540 "bdev_name": "Malloc1" 00:06:45.540 } 00:06:45.540 ]' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.540 { 00:06:45.540 "nbd_device": "/dev/nbd0", 00:06:45.540 "bdev_name": "Malloc0" 00:06:45.540 }, 00:06:45.540 { 00:06:45.540 "nbd_device": "/dev/nbd1", 00:06:45.540 "bdev_name": "Malloc1" 00:06:45.540 } 00:06:45.540 ]' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.540 /dev/nbd1' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.540 /dev/nbd1' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.540 256+0 records in 00:06:45.540 256+0 records out 00:06:45.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125059 s, 83.8 MB/s 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.540 256+0 records in 00:06:45.540 256+0 records out 00:06:45.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139086 s, 75.4 MB/s 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.540 256+0 records in 00:06:45.540 256+0 records out 00:06:45.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203672 s, 51.5 MB/s 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.540 19:11:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.801 19:11:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.060 19:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.060 19:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.060 19:11:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.061 19:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.321 19:11:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.321 19:11:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.581 19:11:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.523 [2024-07-22 19:11:06.277025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.523 [2024-07-22 19:11:06.443959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.523 [2024-07-22 19:11:06.443962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.784 [2024-07-22 19:11:06.582321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.784 [2024-07-22 19:11:06.582371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.697 19:11:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2674795 /var/tmp/spdk-nbd.sock 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2674795 ']' 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
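The data-path check repeated in each round above is a plain write/read-back comparison: 1 MiB of random data is written through each NBD export with O_DIRECT and then byte-compared against the source file. A condensed sketch of that verify loop, with the long workspace paths shortened:

tmp=./nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    # push the pattern through the export, bypassing the page cache
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    # read it back and compare the first 1 MiB byte by byte
    cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"
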
00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:49.697 19:11:08 event.app_repeat -- event/event.sh@39 -- # killprocess 2674795 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2674795 ']' 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2674795 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674795 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674795' 00:06:49.697 killing process with pid 2674795 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2674795 00:06:49.697 19:11:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2674795 00:06:50.641 spdk_app_start is called in Round 0. 00:06:50.641 Shutdown signal received, stop current app iteration 00:06:50.641 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:50.641 spdk_app_start is called in Round 1. 00:06:50.641 Shutdown signal received, stop current app iteration 00:06:50.641 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:50.641 spdk_app_start is called in Round 2. 00:06:50.641 Shutdown signal received, stop current app iteration 00:06:50.641 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:50.641 spdk_app_start is called in Round 3. 
00:06:50.641 Shutdown signal received, stop current app iteration 00:06:50.641 19:11:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:50.641 19:11:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:50.641 00:06:50.641 real 0m16.955s 00:06:50.641 user 0m34.839s 00:06:50.641 sys 0m2.200s 00:06:50.641 19:11:09 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.641 19:11:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.641 ************************************ 00:06:50.641 END TEST app_repeat 00:06:50.641 ************************************ 00:06:50.641 19:11:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:50.641 19:11:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:50.641 19:11:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:50.641 19:11:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.641 19:11:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.641 19:11:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.641 ************************************ 00:06:50.641 START TEST cpu_locks 00:06:50.641 ************************************ 00:06:50.641 19:11:09 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:50.641 * Looking for test storage... 00:06:50.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:50.641 19:11:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:50.641 19:11:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:50.641 19:11:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:50.641 19:11:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:50.641 19:11:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.641 19:11:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.641 19:11:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.641 ************************************ 00:06:50.641 START TEST default_locks 00:06:50.641 ************************************ 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2678409 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2678409 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2678409 ']' 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.641 19:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.641 19:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.903 [2024-07-22 19:11:09.682737] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:50.903 [2024-07-22 19:11:09.682866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678409 ] 00:06:50.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.903 [2024-07-22 19:11:09.806901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.163 [2024-07-22 19:11:09.984726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.734 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.734 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:51.734 19:11:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2678409 00:06:51.734 19:11:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2678409 00:06:51.734 19:11:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.995 lslocks: write error 00:06:51.995 19:11:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2678409 00:06:51.995 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2678409 ']' 00:06:51.995 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2678409 00:06:51.995 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:51.995 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.995 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2678409 00:06:52.255 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.255 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.255 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2678409' 00:06:52.255 killing process with pid 2678409 00:06:52.255 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2678409 00:06:52.255 19:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2678409 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2678409 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2678409 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2678409 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2678409 ']' 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2678409) - No such process 00:06:54.170 ERROR: process (pid: 2678409) is no longer running 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.170 19:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.170 00:06:54.170 real 0m3.014s 00:06:54.170 user 0m2.992s 00:06:54.171 sys 0m0.596s 00:06:54.171 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.171 19:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.171 ************************************ 00:06:54.171 END TEST default_locks 00:06:54.171 ************************************ 00:06:54.171 19:11:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:54.171 19:11:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:54.171 19:11:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.171 19:11:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.171 19:11:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.171 ************************************ 00:06:54.171 START TEST default_locks_via_rpc 00:06:54.171 ************************************ 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2679035 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2679035 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 2679035 ']' 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.171 19:11:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.171 [2024-07-22 19:11:12.759638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:54.171 [2024-07-22 19:11:12.759762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679035 ] 00:06:54.171 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.171 [2024-07-22 19:11:12.883549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.171 [2024-07-22 19:11:13.061489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2679035 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2679035 00:06:54.742 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
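The xtrace above exercises the same lock-existence check several times, so a condensed sketch may help when reading this log. This mirrors the pattern the trace shows at event/cpu_locks.sh@22 (the locks_exist helper piping lslocks into grep); it is an illustrative reconstruction, not the verbatim SPDK script, and the argument handling is assumed.

    # Sketch of the check traced above: list the file locks held by the target
    # pid and look for the per-core spdk_cpu_lock files. The "lslocks: write
    # error" seen in the log is likely just lslocks hitting a closed pipe after
    # grep -q exits on the first match.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }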
00:06:55.313 19:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2679035 00:06:55.313 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2679035 ']' 00:06:55.313 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2679035 00:06:55.313 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:55.313 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.313 19:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2679035 00:06:55.313 19:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.313 19:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.313 19:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2679035' 00:06:55.313 killing process with pid 2679035 00:06:55.313 19:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2679035 00:06:55.313 19:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2679035 00:06:57.227 00:06:57.227 real 0m3.004s 00:06:57.227 user 0m2.965s 00:06:57.227 sys 0m0.608s 00:06:57.227 19:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.227 19:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.227 ************************************ 00:06:57.227 END TEST default_locks_via_rpc 00:06:57.227 ************************************ 00:06:57.227 19:11:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.227 19:11:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:57.227 19:11:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.227 19:11:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.227 19:11:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.227 ************************************ 00:06:57.227 START TEST non_locking_app_on_locked_coremask 00:06:57.227 ************************************ 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2679564 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2679564 /var/tmp/spdk.sock 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2679564 ']' 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.227 19:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.227 [2024-07-22 19:11:15.836899] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:57.227 [2024-07-22 19:11:15.837023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679564 ] 00:06:57.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.227 [2024-07-22 19:11:15.965591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.227 [2024-07-22 19:11:16.145835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2679820 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2679820 /var/tmp/spdk2.sock 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2679820 ']' 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.797 19:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:58.058 [2024-07-22 19:11:16.802116] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:58.058 [2024-07-22 19:11:16.802238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679820 ] 00:06:58.058 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.058 [2024-07-22 19:11:16.961393] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
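For orientation: in the non_locking_app_on_locked_coremask run above, a second spdk_tgt is started against a separate RPC socket with cpumask locks disabled, which is why the "CPU core locks deactivated" notice appears and both instances can run on core 0. A minimal sketch of that launch pattern, with the binary path abbreviated and the waitforlisten loop omitted (assumptions, not the exact cpu_locks.sh code):

    # First instance claims core 0 and holds the /var/tmp/spdk_cpu_lock_000 file.
    spdk_tgt -m 0x1 &
    # Second instance reuses the same core mask but opts out of the lock check
    # and listens on its own RPC socket, as seen in the trace.
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &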
00:06:58.058 [2024-07-22 19:11:16.961435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.629 [2024-07-22 19:11:17.309394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.570 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.570 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:59.570 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2679564 00:06:59.570 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2679564 00:06:59.570 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.142 lslocks: write error 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2679564 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2679564 ']' 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2679564 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2679564 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2679564' 00:07:00.142 killing process with pid 2679564 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2679564 00:07:00.142 19:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2679564 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2679820 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2679820 ']' 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2679820 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2679820 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2679820' 00:07:03.445 
killing process with pid 2679820 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2679820 00:07:03.445 19:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2679820 00:07:05.360 00:07:05.360 real 0m8.178s 00:07:05.360 user 0m8.244s 00:07:05.360 sys 0m1.122s 00:07:05.360 19:11:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.360 19:11:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.360 ************************************ 00:07:05.360 END TEST non_locking_app_on_locked_coremask 00:07:05.360 ************************************ 00:07:05.360 19:11:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:05.360 19:11:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:05.360 19:11:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.360 19:11:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.360 19:11:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.360 ************************************ 00:07:05.360 START TEST locking_app_on_unlocked_coremask 00:07:05.360 ************************************ 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2681202 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2681202 /var/tmp/spdk.sock 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2681202 ']' 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.360 19:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.360 [2024-07-22 19:11:24.081064] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:05.360 [2024-07-22 19:11:24.081175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681202 ] 00:07:05.360 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.360 [2024-07-22 19:11:24.206859] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:05.360 [2024-07-22 19:11:24.206900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.621 [2024-07-22 19:11:24.388486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2681536 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2681536 /var/tmp/spdk2.sock 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2681536 ']' 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.216 19:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.216 [2024-07-22 19:11:25.064512] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:06.216 [2024-07-22 19:11:25.064631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681536 ] 00:07:06.216 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.486 [2024-07-22 19:11:25.223914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.747 [2024-07-22 19:11:25.576025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.129 19:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.129 19:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:08.129 19:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2681536 00:07:08.129 19:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2681536 00:07:08.129 19:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.390 lslocks: write error 00:07:08.390 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2681202 00:07:08.390 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2681202 ']' 00:07:08.390 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2681202 00:07:08.390 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:08.390 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.650 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2681202 00:07:08.650 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.650 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.650 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2681202' 00:07:08.650 killing process with pid 2681202 00:07:08.650 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2681202 00:07:08.650 19:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2681202 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2681536 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2681536 ']' 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2681536 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2681536 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2681536' 00:07:11.946 killing process with pid 2681536 00:07:11.946 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2681536 00:07:11.947 19:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2681536 00:07:13.859 00:07:13.859 real 0m8.345s 00:07:13.859 user 0m8.391s 00:07:13.859 sys 0m1.206s 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.859 ************************************ 00:07:13.859 END TEST locking_app_on_unlocked_coremask 00:07:13.859 ************************************ 00:07:13.859 19:11:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.859 19:11:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:13.859 19:11:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.859 19:11:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.859 19:11:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.859 ************************************ 00:07:13.859 START TEST locking_app_on_locked_coremask 00:07:13.859 ************************************ 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2682920 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2682920 /var/tmp/spdk.sock 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2682920 ']' 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.859 19:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.859 [2024-07-22 19:11:32.501347] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:13.859 [2024-07-22 19:11:32.501448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682920 ] 00:07:13.859 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.859 [2024-07-22 19:11:32.615492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.859 [2024-07-22 19:11:32.791021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2683254 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2683254 /var/tmp/spdk2.sock 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2683254 /var/tmp/spdk2.sock 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2683254 /var/tmp/spdk2.sock 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2683254 ']' 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.431 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.432 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.432 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.432 19:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.692 [2024-07-22 19:11:33.458274] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:14.692 [2024-07-22 19:11:33.458388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683254 ] 00:07:14.692 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.692 [2024-07-22 19:11:33.623159] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2682920 has claimed it. 00:07:14.692 [2024-07-22 19:11:33.623219] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2683254) - No such process 00:07:15.265 ERROR: process (pid: 2683254) is no longer running 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2682920 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2682920 00:07:15.265 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.526 lslocks: write error 00:07:15.526 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2682920 00:07:15.526 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2682920 ']' 00:07:15.526 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2682920 00:07:15.526 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.526 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.526 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2682920 00:07:15.787 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.787 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.787 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2682920' 00:07:15.787 killing process with pid 2682920 00:07:15.787 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2682920 00:07:15.787 19:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2682920 00:07:17.700 00:07:17.700 real 0m3.712s 00:07:17.700 user 0m3.838s 00:07:17.700 sys 0m0.772s 00:07:17.700 19:11:36 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.700 19:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.700 ************************************ 00:07:17.700 END TEST locking_app_on_locked_coremask 00:07:17.700 ************************************ 00:07:17.700 19:11:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.700 19:11:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.700 19:11:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.700 19:11:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.700 19:11:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.700 ************************************ 00:07:17.700 START TEST locking_overlapped_coremask 00:07:17.700 ************************************ 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2683726 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2683726 /var/tmp/spdk.sock 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2683726 ']' 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.700 19:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.700 [2024-07-22 19:11:36.295279] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:17.700 [2024-07-22 19:11:36.295405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683726 ] 00:07:17.700 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.700 [2024-07-22 19:11:36.424152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.700 [2024-07-22 19:11:36.604483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.700 [2024-07-22 19:11:36.604662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.700 [2024-07-22 19:11:36.604665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2683970 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2683970 /var/tmp/spdk2.sock 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2683970 /var/tmp/spdk2.sock 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2683970 /var/tmp/spdk2.sock 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2683970 ']' 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.270 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.531 [2024-07-22 19:11:37.264345] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:18.531 [2024-07-22 19:11:37.264464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683970 ] 00:07:18.531 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.531 [2024-07-22 19:11:37.394955] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2683726 has claimed it. 00:07:18.531 [2024-07-22 19:11:37.394999] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2683970) - No such process 00:07:19.102 ERROR: process (pid: 2683970) is no longer running 00:07:19.102 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.102 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:19.102 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:19.102 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.102 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.102 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2683726 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2683726 ']' 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2683726 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2683726 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2683726' 00:07:19.103 killing process with pid 2683726 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2683726 00:07:19.103 19:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2683726 00:07:20.573 00:07:20.573 real 0m3.323s 00:07:20.573 user 0m8.601s 00:07:20.573 sys 0m0.566s 00:07:20.573 19:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.573 19:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.573 ************************************ 00:07:20.573 END TEST locking_overlapped_coremask 00:07:20.573 ************************************ 00:07:20.834 19:11:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:20.834 19:11:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:20.834 19:11:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.834 19:11:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.834 19:11:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.834 ************************************ 00:07:20.834 START TEST locking_overlapped_coremask_via_rpc 00:07:20.834 ************************************ 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2684444 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2684444 /var/tmp/spdk.sock 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2684444 ']' 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.834 19:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.834 [2024-07-22 19:11:39.696614] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:20.834 [2024-07-22 19:11:39.696743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684444 ] 00:07:20.834 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.095 [2024-07-22 19:11:39.825423] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.095 [2024-07-22 19:11:39.825476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.095 [2024-07-22 19:11:40.008915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.095 [2024-07-22 19:11:40.009011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.095 [2024-07-22 19:11:40.009019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2684679 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2684679 /var/tmp/spdk2.sock 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2684679 ']' 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.665 19:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.968 [2024-07-22 19:11:40.674535] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:21.969 [2024-07-22 19:11:40.674649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684679 ] 00:07:21.969 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.969 [2024-07-22 19:11:40.804800] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.969 [2024-07-22 19:11:40.804835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.232 [2024-07-22 19:11:41.085008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.232 [2024-07-22 19:11:41.085107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.232 [2024-07-22 19:11:41.085132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.173 [2024-07-22 19:11:41.938306] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2684444 has claimed it. 
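The core-lock failure above is expected: the two targets were started with overlapping core masks (0x7 covers cores 0-2, 0x1c covers cores 2-4, as the reactor start-up notices confirm), and the first target has just re-enabled its locks via framework_enable_cpumask_locks, so the second one cannot claim core 2. A quick shell check of the overlap:

    # 0x7 = cores 0-2, 0x1c = cores 2-4; the bitwise AND is the contested core
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2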
00:07:23.173 request: 00:07:23.173 { 00:07:23.173 "method": "framework_enable_cpumask_locks", 00:07:23.173 "req_id": 1 00:07:23.173 } 00:07:23.173 Got JSON-RPC error response 00:07:23.173 response: 00:07:23.173 { 00:07:23.173 "code": -32603, 00:07:23.173 "message": "Failed to claim CPU core: 2" 00:07:23.173 } 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2684444 /var/tmp/spdk.sock 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2684444 ']' 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.173 19:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2684679 /var/tmp/spdk2.sock 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2684679 ']' 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
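For reference, the JSON-RPC failure recorded above can be reproduced by hand against the secondary target's socket; a minimal sketch, assuming the stock scripts/rpc.py helper that the test's rpc_cmd wrapper drives:

    # ask the second target (pid 2684679, socket /var/tmp/spdk2.sock) to enforce
    # core locks; with core 2 already held by pid 2684444 this returns the
    # -32603 'Failed to claim CPU core: 2' error shown above
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks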
00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.173 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.434 00:07:23.434 real 0m2.680s 00:07:23.434 user 0m0.789s 00:07:23.434 sys 0m0.165s 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.434 19:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.434 ************************************ 00:07:23.434 END TEST locking_overlapped_coremask_via_rpc 00:07:23.434 ************************************ 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:23.434 19:11:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:23.434 19:11:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2684444 ]] 00:07:23.434 19:11:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2684444 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2684444 ']' 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2684444 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2684444 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2684444' 00:07:23.434 killing process with pid 2684444 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2684444 00:07:23.434 19:11:42 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2684444 00:07:25.347 19:11:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2684679 ]] 00:07:25.347 19:11:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2684679 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2684679 ']' 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2684679 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2684679 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2684679' 00:07:25.347 killing process with pid 2684679 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2684679 00:07:25.347 19:11:44 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2684679 00:07:26.288 19:11:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:26.288 19:11:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:26.288 19:11:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2684444 ]] 00:07:26.288 19:11:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2684444 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2684444 ']' 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2684444 00:07:26.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2684444) - No such process 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2684444 is not found' 00:07:26.288 Process with pid 2684444 is not found 00:07:26.288 19:11:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2684679 ]] 00:07:26.288 19:11:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2684679 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2684679 ']' 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2684679 00:07:26.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2684679) - No such process 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2684679 is not found' 00:07:26.288 Process with pid 2684679 is not found 00:07:26.288 19:11:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:26.288 00:07:26.288 real 0m35.777s 00:07:26.288 user 0m56.651s 00:07:26.288 sys 0m6.157s 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.288 19:11:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.288 ************************************ 00:07:26.288 END TEST cpu_locks 00:07:26.288 ************************************ 00:07:26.549 19:11:45 event -- common/autotest_common.sh@1142 -- # return 0 00:07:26.549 00:07:26.549 real 1m4.194s 00:07:26.549 user 1m51.586s 00:07:26.549 sys 0m9.572s 00:07:26.549 19:11:45 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.549 19:11:45 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.549 ************************************ 00:07:26.549 END TEST event 00:07:26.549 ************************************ 00:07:26.549 19:11:45 -- common/autotest_common.sh@1142 -- # return 0 00:07:26.549 19:11:45 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:26.549 19:11:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.549 19:11:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.549 19:11:45 
-- common/autotest_common.sh@10 -- # set +x 00:07:26.549 ************************************ 00:07:26.549 START TEST thread 00:07:26.549 ************************************ 00:07:26.549 19:11:45 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:26.549 * Looking for test storage... 00:07:26.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:26.549 19:11:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:26.549 19:11:45 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:26.549 19:11:45 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.549 19:11:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.549 ************************************ 00:07:26.549 START TEST thread_poller_perf 00:07:26.549 ************************************ 00:07:26.549 19:11:45 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:26.809 [2024-07-22 19:11:45.530577] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:26.810 [2024-07-22 19:11:45.530697] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685787 ] 00:07:26.810 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.810 [2024-07-22 19:11:45.657941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.070 [2024-07-22 19:11:45.835954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.070 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:28.453 ====================================== 00:07:28.453 busy:2412930088 (cyc) 00:07:28.453 total_run_count: 284000 00:07:28.453 tsc_hz: 2400000000 (cyc) 00:07:28.453 ====================================== 00:07:28.453 poller_cost: 8496 (cyc), 3540 (nsec) 00:07:28.453 00:07:28.453 real 0m1.642s 00:07:28.453 user 0m1.487s 00:07:28.453 sys 0m0.148s 00:07:28.453 19:11:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.453 19:11:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.453 ************************************ 00:07:28.453 END TEST thread_poller_perf 00:07:28.453 ************************************ 00:07:28.453 19:11:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:28.453 19:11:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.453 19:11:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:28.453 19:11:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.453 19:11:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.453 ************************************ 00:07:28.453 START TEST thread_poller_perf 00:07:28.453 ************************************ 00:07:28.453 19:11:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.453 [2024-07-22 19:11:47.244916] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:28.453 [2024-07-22 19:11:47.245013] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686144 ] 00:07:28.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.453 [2024-07-22 19:11:47.361584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.714 [2024-07-22 19:11:47.539376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.714 Running 1000 pollers for 1 seconds with 0 microseconds period. 
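The poller_cost figures are consistent with the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz; checking the 1-microsecond-period summary above:

    # 2412930088 cycles / 284000 runs = 8496 cycles per poll;
    # 8496 cycles at 2.4 GHz = 8496 * 1000 / 2400 = 3540 ns, matching the report
    echo $(( 2412930088 / 284000 ))               # 8496
    echo $(( 2412930088 / 284000 * 1000 / 2400 )) # 3540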
00:07:30.099 ====================================== 00:07:30.099 busy:2403022980 (cyc) 00:07:30.099 total_run_count: 3680000 00:07:30.099 tsc_hz: 2400000000 (cyc) 00:07:30.099 ====================================== 00:07:30.099 poller_cost: 652 (cyc), 271 (nsec) 00:07:30.099 00:07:30.099 real 0m1.621s 00:07:30.099 user 0m1.474s 00:07:30.099 sys 0m0.141s 00:07:30.099 19:11:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.099 19:11:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.099 ************************************ 00:07:30.099 END TEST thread_poller_perf 00:07:30.099 ************************************ 00:07:30.099 19:11:48 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:30.099 19:11:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:30.099 00:07:30.099 real 0m3.514s 00:07:30.099 user 0m3.063s 00:07:30.099 sys 0m0.453s 00:07:30.099 19:11:48 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.099 19:11:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.099 ************************************ 00:07:30.099 END TEST thread 00:07:30.099 ************************************ 00:07:30.099 19:11:48 -- common/autotest_common.sh@1142 -- # return 0 00:07:30.099 19:11:48 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:30.099 19:11:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.099 19:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.099 19:11:48 -- common/autotest_common.sh@10 -- # set +x 00:07:30.099 ************************************ 00:07:30.099 START TEST accel 00:07:30.099 ************************************ 00:07:30.099 19:11:48 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:30.099 * Looking for test storage... 00:07:30.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:30.099 19:11:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:30.099 19:11:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:30.099 19:11:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:30.099 19:11:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2686543 00:07:30.099 19:11:49 accel -- accel/accel.sh@63 -- # waitforlisten 2686543 00:07:30.099 19:11:49 accel -- common/autotest_common.sh@829 -- # '[' -z 2686543 ']' 00:07:30.099 19:11:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.099 19:11:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.099 19:11:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:30.099 19:11:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.099 19:11:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:30.099 19:11:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.099 19:11:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.099 19:11:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.099 19:11:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.099 19:11:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.099 19:11:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.099 19:11:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.099 19:11:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:30.099 19:11:49 accel -- accel/accel.sh@41 -- # jq -r . 00:07:30.360 [2024-07-22 19:11:49.131968] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:30.360 [2024-07-22 19:11:49.132089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686543 ] 00:07:30.360 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.360 [2024-07-22 19:11:49.246057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.620 [2024-07-22 19:11:49.420242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.192 19:11:49 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.192 19:11:49 accel -- common/autotest_common.sh@862 -- # return 0 00:07:31.192 19:11:49 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:31.192 19:11:49 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:31.192 19:11:49 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:31.192 19:11:49 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:31.192 19:11:49 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:31.192 19:11:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:31.192 19:11:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:31.192 19:11:50 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 
19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # IFS== 00:07:31.193 19:11:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:31.193 19:11:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:31.193 19:11:50 accel -- accel/accel.sh@75 -- # killprocess 2686543 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@948 -- # '[' -z 2686543 ']' 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@952 -- # kill -0 2686543 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@953 -- # uname 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2686543 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2686543' 00:07:31.193 killing process with pid 2686543 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@967 -- # kill 2686543 00:07:31.193 19:11:50 accel -- common/autotest_common.sh@972 -- # wait 2686543 00:07:33.107 19:11:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:33.107 19:11:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:33.107 19:11:51 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.107 19:11:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.107 19:11:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.107 19:11:51 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:33.107 19:11:51 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:33.107 19:11:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:33.107 19:11:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.108 19:11:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.108 19:11:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.108 19:11:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.108 19:11:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.108 19:11:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:33.108 19:11:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
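The opcode-assignment check near the top of this suite flattens the accel_get_opc_assignments RPC output into opcode=module pairs with jq and confirms that every opcode is currently backed by the software module. The same transform on an illustrative (not captured-from-this-run) two-entry response:

    # hypothetical sample of what 'scripts/rpc.py accel_get_opc_assignments' returns
    echo '{ "copy": "software", "crc32c": "software" }' \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # crc32c=software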
00:07:33.108 19:11:51 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.108 19:11:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:33.108 19:11:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.108 19:11:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:33.108 19:11:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:33.108 19:11:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.108 19:11:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.108 ************************************ 00:07:33.108 START TEST accel_missing_filename 00:07:33.108 ************************************ 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.108 19:11:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:33.108 19:11:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:33.108 [2024-07-22 19:11:51.938208] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:33.108 [2024-07-22 19:11:51.938326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687223 ] 00:07:33.108 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.108 [2024-07-22 19:11:52.061326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.369 [2024-07-22 19:11:52.239900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.630 [2024-07-22 19:11:52.383014] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.891 [2024-07-22 19:11:52.743673] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:34.152 A filename is required. 
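That "A filename is required." error is exactly what accel_missing_filename is probing for: per the option summary printed further down, compress/decompress workloads need an uncompressed input file via -l. A minimal sketch of a valid invocation from the SPDK tree, using the same input file the next sub-test supplies:

    # compress workload with an explicit input file; -y is omitted because the
    # verify option is rejected for compression (see accel_compress_verify below)
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib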
00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.152 00:07:34.152 real 0m1.145s 00:07:34.152 user 0m0.996s 00:07:34.152 sys 0m0.186s 00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.152 19:11:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:34.152 ************************************ 00:07:34.152 END TEST accel_missing_filename 00:07:34.152 ************************************ 00:07:34.152 19:11:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.152 19:11:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:34.152 19:11:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:34.152 19:11:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.152 19:11:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.152 ************************************ 00:07:34.152 START TEST accel_compress_verify 00:07:34.152 ************************************ 00:07:34.152 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:34.153 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:34.153 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:34.153 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:34.153 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.153 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:34.153 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.153 19:11:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.414 19:11:53 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:34.414 19:11:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:34.414 [2024-07-22 19:11:53.160083] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:34.414 [2024-07-22 19:11:53.160236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687337 ] 00:07:34.414 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.414 [2024-07-22 19:11:53.281522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.674 [2024-07-22 19:11:53.460199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.674 [2024-07-22 19:11:53.604074] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.245 [2024-07-22 19:11:53.965776] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:35.506 00:07:35.506 Compression does not support the verify option, aborting. 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.506 00:07:35.506 real 0m1.148s 00:07:35.506 user 0m0.997s 00:07:35.506 sys 0m0.187s 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.506 19:11:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:35.506 ************************************ 00:07:35.506 END TEST accel_compress_verify 00:07:35.507 ************************************ 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.507 19:11:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.507 ************************************ 00:07:35.507 START TEST accel_wrong_workload 00:07:35.507 ************************************ 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:35.507 19:11:54 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:35.507 19:11:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:35.507 Unsupported workload type: foobar 00:07:35.507 [2024-07-22 19:11:54.370366] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:35.507 accel_perf options: 00:07:35.507 [-h help message] 00:07:35.507 [-q queue depth per core] 00:07:35.507 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:35.507 [-T number of threads per core 00:07:35.507 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:35.507 [-t time in seconds] 00:07:35.507 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:35.507 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:35.507 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:35.507 [-l for compress/decompress workloads, name of uncompressed input file 00:07:35.507 [-S for crc32c workload, use this seed value (default 0) 00:07:35.507 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:35.507 [-f for fill workload, use this BYTE value (default 255) 00:07:35.507 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:35.507 [-y verify result if this switch is on] 00:07:35.507 [-a tasks to allocate per core (default: same value as -q)] 00:07:35.507 Can be used to spread operations across a wider range of memory. 
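Since the option summary above is the only accel_perf usage text in this log, here is the shape of a direct invocation for the crc32c case exercised a little further down, with the flags taken from that sub-test's xtrace (the harness-injected '-c /dev/fd/62' JSON config is left out of this sketch):

    # one-second crc32c run with seed value 32 and result verification enabled
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y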
00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.507 00:07:35.507 real 0m0.075s 00:07:35.507 user 0m0.083s 00:07:35.507 sys 0m0.036s 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.507 19:11:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:35.507 ************************************ 00:07:35.507 END TEST accel_wrong_workload 00:07:35.507 ************************************ 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.507 19:11:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.507 19:11:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 ************************************ 00:07:35.768 START TEST accel_negative_buffers 00:07:35.768 ************************************ 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:35.768 19:11:54 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:35.768 -x option must be non-negative. 
00:07:35.768 [2024-07-22 19:11:54.517066] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:35.768 accel_perf options: 00:07:35.768 [-h help message] 00:07:35.768 [-q queue depth per core] 00:07:35.768 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:35.768 [-T number of threads per core 00:07:35.768 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:35.768 [-t time in seconds] 00:07:35.768 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:35.768 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:35.768 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:35.768 [-l for compress/decompress workloads, name of uncompressed input file 00:07:35.768 [-S for crc32c workload, use this seed value (default 0) 00:07:35.768 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:35.768 [-f for fill workload, use this BYTE value (default 255) 00:07:35.768 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:35.768 [-y verify result if this switch is on] 00:07:35.768 [-a tasks to allocate per core (default: same value as -q)] 00:07:35.768 Can be used to spread operations across a wider range of memory. 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:35.768 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.769 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.769 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.769 00:07:35.769 real 0m0.077s 00:07:35.769 user 0m0.079s 00:07:35.769 sys 0m0.040s 00:07:35.769 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.769 19:11:54 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:35.769 ************************************ 00:07:35.769 END TEST accel_negative_buffers 00:07:35.769 ************************************ 00:07:35.769 19:11:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.769 19:11:54 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:35.769 19:11:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:35.769 19:11:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.769 19:11:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.769 ************************************ 00:07:35.769 START TEST accel_crc32c 00:07:35.769 ************************************ 00:07:35.769 19:11:54 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:35.769 19:11:54 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:35.769 [2024-07-22 19:11:54.661100] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:35.769 [2024-07-22 19:11:54.661220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687676 ] 00:07:36.038 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.038 [2024-07-22 19:11:54.787040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.038 [2024-07-22 19:11:54.966468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:36.299 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.300 19:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:38.215 19:11:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.215 00:07:38.215 real 0m2.152s 00:07:38.215 user 0m1.977s 00:07:38.215 sys 0m0.187s 00:07:38.215 19:11:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.215 19:11:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 ************************************ 00:07:38.215 END TEST accel_crc32c 00:07:38.215 ************************************ 00:07:38.215 19:11:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.215 19:11:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:38.215 19:11:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:38.215 19:11:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.215 19:11:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.215 ************************************ 00:07:38.215 START TEST accel_crc32c_C2 00:07:38.215 ************************************ 00:07:38.215 19:11:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:38.215 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.215 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:38.215 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:38.216 19:11:56 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.216 19:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:38.216 [2024-07-22 19:11:56.877166] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:38.216 [2024-07-22 19:11:56.877281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688226 ] 00:07:38.216 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.216 [2024-07-22 19:11:56.996855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.477 [2024-07-22 19:11:57.175580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:38.477 19:11:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.393 00:07:40.393 real 0m2.142s 00:07:40.393 user 0m1.966s 00:07:40.393 sys 0m0.188s 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.393 19:11:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:40.393 ************************************ 00:07:40.393 END TEST accel_crc32c_C2 00:07:40.393 ************************************ 00:07:40.393 19:11:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.393 19:11:59 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:40.393 19:11:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:40.393 19:11:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.393 19:11:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.393 ************************************ 00:07:40.393 START TEST accel_copy 00:07:40.393 ************************************ 00:07:40.393 19:11:59 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:40.393 19:11:59 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:40.393 [2024-07-22 19:11:59.096325] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:40.393 [2024-07-22 19:11:59.096437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688710 ] 00:07:40.393 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.393 [2024-07-22 19:11:59.214082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.655 [2024-07-22 19:11:59.391935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.655 19:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.570 
19:12:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:42.570 19:12:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.570 00:07:42.570 real 0m2.149s 00:07:42.570 user 0m1.981s 00:07:42.570 sys 0m0.179s 00:07:42.570 19:12:01 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.570 19:12:01 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.570 ************************************ 00:07:42.570 END TEST accel_copy 00:07:42.570 ************************************ 00:07:42.570 19:12:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.571 19:12:01 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.571 19:12:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:42.571 19:12:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.571 19:12:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.571 ************************************ 00:07:42.571 START TEST accel_fill 00:07:42.571 ************************************ 00:07:42.571 19:12:01 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:42.571 19:12:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:42.571 [2024-07-22 19:12:01.310820] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:42.571 [2024-07-22 19:12:01.310935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689151 ] 00:07:42.571 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.571 [2024-07-22 19:12:01.425832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.832 [2024-07-22 19:12:01.605932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
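The run_test line just above launches build/examples/accel_perf with the flags recorded verbatim in the trace (here: -t 1 -w fill -f 128 -q 64 -a 64 -y). A minimal sketch of re-issuing that run by hand follows; it assumes a local SPDK build at the path logged by the harness and drops the -c /dev/fd/62 argument, since the trace shows build_accel_config producing an empty accel_json_cfg=() and a JSON config file is optional for SPDK applications.

    #!/usr/bin/env bash
    # Hedged sketch: re-issue the accel_perf "fill" run recorded in this trace.
    # Assumptions: SPDK is built at the path below (taken from the log) and the
    # harness-supplied config on /dev/fd/62 can be omitted because it is empty here.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y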
00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.832 19:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.747 19:12:03 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:44.747 19:12:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.747 00:07:44.747 real 0m2.142s 00:07:44.747 user 0m1.979s 00:07:44.747 sys 0m0.175s 00:07:44.747 19:12:03 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.747 19:12:03 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:44.747 ************************************ 00:07:44.747 END TEST accel_fill 00:07:44.747 ************************************ 00:07:44.747 19:12:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.747 19:12:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:44.747 19:12:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:44.747 19:12:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.747 19:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.747 ************************************ 00:07:44.747 START TEST accel_copy_crc32c 00:07:44.747 ************************************ 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:44.747 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:44.747 [2024-07-22 19:12:03.523007] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:44.747 [2024-07-22 19:12:03.523119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689653 ] 00:07:44.747 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.747 [2024-07-22 19:12:03.644785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.008 [2024-07-22 19:12:03.830380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.269 
19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.269 19:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.183 00:07:47.183 real 0m2.151s 00:07:47.183 user 0m1.966s 00:07:47.183 sys 0m0.198s 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.183 19:12:05 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:47.183 ************************************ 00:07:47.183 END TEST accel_copy_crc32c 00:07:47.183 ************************************ 00:07:47.183 19:12:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.183 19:12:05 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:47.183 19:12:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:47.183 19:12:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.183 19:12:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.183 ************************************ 00:07:47.183 START TEST accel_copy_crc32c_C2 00:07:47.183 ************************************ 00:07:47.183 19:12:05 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:47.183 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.183 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.184 19:12:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:47.184 [2024-07-22 19:12:05.746490] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:47.184 [2024-07-22 19:12:05.746602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690221 ] 00:07:47.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.184 [2024-07-22 19:12:05.866948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.184 [2024-07-22 19:12:06.044626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:47.444 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.445 19:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
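The repeated IFS=:, read -r var val and case "$var" in lines throughout this section are accel.sh reading accel_perf's output line by line, splitting each line on ':' and capturing selected values (the trace shows accel_opc and accel_module being set, e.g. to crc32c and software), which later satisfy the accel.sh@27 non-empty checks that follow below. A simplified stand-in for that loop is sketched here; only the shape of the loop is taken from the trace, while the key names and the sample input are assumptions inferred from the captured values (the authoritative version is test/accel/accel.sh).

    # Simplified stand-in for the parse loop traced above (accel/accel.sh@19-23).
    # Key patterns in the case arms are assumptions; the sample lines below are
    # illustrative, with values taken from this log.
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        val=${val# }                               # trim a leading space, if any
        case "$var" in
            *"Workload Type"*) accel_opc=$val ;;   # e.g. crc32c, copy_crc32c
            *Module*)          accel_module=$val ;; # e.g. software
        esac
    done <<'EOF'
    Workload Type: crc32c
    Module: software
    EOF
    [[ -n $accel_module && -n $accel_opc ]]        # mirrors the accel.sh@27 checks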
00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.358 00:07:49.358 real 0m2.145s 00:07:49.358 user 0m1.974s 00:07:49.358 sys 0m0.182s 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.358 19:12:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:49.358 ************************************ 00:07:49.358 END TEST accel_copy_crc32c_C2 00:07:49.358 ************************************ 00:07:49.358 19:12:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.358 19:12:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:49.358 19:12:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.358 19:12:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.358 19:12:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.358 ************************************ 00:07:49.358 START TEST accel_dualcast 00:07:49.358 ************************************ 00:07:49.358 19:12:07 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.358 19:12:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.359 19:12:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.359 19:12:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:49.359 19:12:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:49.359 [2024-07-22 19:12:07.961854] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
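Each test in this section is wrapped by run_test from test/common/autotest_common.sh, which is what produces the starred START TEST / END TEST banners and the real/user/sys timing lines (for example, real 0m2.145s for accel_copy_crc32c_C2 just above). The sketch below is a reduced stand-in for illustration only, not the actual autotest_common.sh implementation.

    # Reduced stand-in for run_test: banner, run the named test under `time`,
    # banner again. The real wrapper also manages xtrace (the xtrace_disable
    # calls visible in this log) and failure bookkeeping.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    # Usage, as recorded in this log:
    # run_test accel_dualcast accel_test -t 1 -w dualcast -y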
00:07:49.359 [2024-07-22 19:12:07.961958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690581 ] 00:07:49.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.359 [2024-07-22 19:12:08.081342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.359 [2024-07-22 19:12:08.260617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.620 19:12:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.539 19:12:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.539 19:12:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.539 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.539 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.539 19:12:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.540 19:12:10 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:51.540 19:12:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.540 00:07:51.540 real 0m2.144s 00:07:51.540 user 0m1.964s 00:07:51.540 sys 0m0.191s 00:07:51.540 19:12:10 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.540 19:12:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 ************************************ 00:07:51.540 END TEST accel_dualcast 00:07:51.540 ************************************ 00:07:51.540 19:12:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.540 19:12:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:51.540 19:12:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:51.540 19:12:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.540 19:12:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.540 ************************************ 00:07:51.540 START TEST accel_compare 00:07:51.540 ************************************ 00:07:51.540 19:12:10 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:51.540 19:12:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:51.540 [2024-07-22 19:12:10.178817] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
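The three accel/accel.sh@27 checks that close out each test (visible above with their variables already expanded, e.g. software and dualcast) appear to come from a single assertion line; a hedged reconstruction with the variables unexpanded:

    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]

In other words, the test only passes if accel_perf reported both an opcode and a module, and the module that executed the workload was the software implementation.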
00:07:51.540 [2024-07-22 19:12:10.178922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691296 ] 00:07:51.540 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.540 [2024-07-22 19:12:10.289982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.540 [2024-07-22 19:12:10.470081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.871 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.872 19:12:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.314 
19:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.314 19:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.580 19:12:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.580 19:12:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:53.580 19:12:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.580 00:07:53.580 real 0m2.136s 00:07:53.580 user 0m1.963s 00:07:53.580 sys 0m0.185s 00:07:53.580 19:12:12 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.580 19:12:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:53.580 ************************************ 00:07:53.580 END TEST accel_compare 00:07:53.580 ************************************ 00:07:53.580 19:12:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.580 19:12:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:53.580 19:12:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:53.580 19:12:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.580 19:12:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.580 ************************************ 00:07:53.580 START TEST accel_xor 00:07:53.580 ************************************ 00:07:53.580 19:12:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:53.580 19:12:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:53.580 [2024-07-22 19:12:12.384397] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
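Every case in this file is registered through run_test from common/autotest_common.sh, which, per the START/END TEST banners and the real/user/sys summary above, appears to time the wrapped command and print those markers around it. Usage exactly as captured in the trace, here for the xor case:

    run_test accel_xor accel_test -t 1 -w xor -y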
00:07:53.580 [2024-07-22 19:12:12.384499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692074 ] 00:07:53.580 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.580 [2024-07-22 19:12:12.502962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.840 [2024-07-22 19:12:12.680811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.101 19:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.016 00:07:56.016 real 0m2.142s 00:07:56.016 user 0m1.971s 00:07:56.016 sys 0m0.184s 00:07:56.016 19:12:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.016 19:12:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:56.016 ************************************ 00:07:56.016 END TEST accel_xor 00:07:56.016 ************************************ 00:07:56.016 19:12:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.016 19:12:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:56.016 19:12:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:56.016 19:12:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.016 19:12:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.016 ************************************ 00:07:56.016 START TEST accel_xor 00:07:56.016 ************************************ 00:07:56.016 19:12:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:56.016 19:12:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:56.016 [2024-07-22 19:12:14.595531] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
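The xor workload is exercised twice: once with the defaults (the first run above reports val=2, i.e. two source buffers) and once under the same test name with -x 3 (this run reports val=3). Presumably -x sets the number of xor source buffers; the two invocations side by side, as an illustration only:

    accel_test -t 1 -w xor -y          # first run, 2 sources reported in the trace
    accel_test -t 1 -w xor -y -x 3     # second run, 3 sources reported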
00:07:56.016 [2024-07-22 19:12:14.595639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692452 ] 00:07:56.016 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.016 [2024-07-22 19:12:14.713332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.016 [2024-07-22 19:12:14.893289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.277 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:58.188 19:12:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.188 00:07:58.188 real 0m2.142s 00:07:58.188 user 0m1.969s 00:07:58.188 sys 0m0.186s 00:07:58.188 19:12:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.188 19:12:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:58.188 ************************************ 00:07:58.188 END TEST accel_xor 00:07:58.188 ************************************ 00:07:58.188 19:12:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.188 19:12:16 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:58.188 19:12:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:58.189 19:12:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.189 19:12:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.189 ************************************ 00:07:58.189 START TEST accel_dif_verify 00:07:58.189 ************************************ 00:07:58.189 19:12:16 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:58.189 19:12:16 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:58.189 [2024-07-22 19:12:16.825104] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
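The pairing of accel/accel.sh@15 (accel_perf -t 1 -w dif_verify) with the @12 entries (build_accel_config plus the full binary path with -c /dev/fd/62) suggests a small wrapper function around the example binary. A sketch under that assumption; how fd 62 is actually wired up is not visible in the trace, so plain process substitution is used here, and the binary path is simply the one from this workspace:

    accel_perf() {
        # generate the accel JSON config and hand it to the example binary,
        # forwarding the caller's workload arguments (-t 1 -w dif_verify ... above)
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
            -c <(build_accel_config) "$@"
    }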
00:07:58.189 [2024-07-22 19:12:16.825287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692827 ] 00:07:58.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.189 [2024-07-22 19:12:16.960596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.449 [2024-07-22 19:12:17.144243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.449 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.450 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:00.360 19:12:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.360 00:08:00.360 real 0m2.177s 00:08:00.360 user 0m1.988s 00:08:00.360 sys 0m0.202s 00:08:00.360 19:12:18 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.360 19:12:18 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:00.360 ************************************ 00:08:00.360 END TEST accel_dif_verify 00:08:00.360 ************************************ 00:08:00.360 19:12:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.360 19:12:18 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:00.360 19:12:18 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:00.360 19:12:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.360 19:12:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.360 ************************************ 00:08:00.360 START TEST accel_dif_generate 00:08:00.360 ************************************ 00:08:00.360 19:12:19 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.360 
19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:00.360 19:12:19 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:00.361 [2024-07-22 19:12:19.060812] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:00.361 [2024-07-22 19:12:19.060926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693434 ] 00:08:00.361 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.361 [2024-07-22 19:12:19.181937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.622 [2024-07-22 19:12:19.361195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:00.622 19:12:19 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.622 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.536 19:12:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:02.536 19:12:21 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.536 00:08:02.536 real 0m2.145s 00:08:02.536 user 0m1.971s 00:08:02.536 sys 0m0.188s 00:08:02.536 19:12:21 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.536 19:12:21 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:02.536 ************************************ 00:08:02.536 END TEST accel_dif_generate 00:08:02.536 ************************************ 00:08:02.536 19:12:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.536 19:12:21 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:02.536 19:12:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:02.536 19:12:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.536 19:12:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.536 ************************************ 00:08:02.536 START TEST accel_dif_generate_copy 00:08:02.536 ************************************ 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:02.536 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:02.536 [2024-07-22 19:12:21.282746] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
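For reference, the accel_dif_generate case that just completed drives the workload through the accel_perf example binary exactly as recorded in the trace above. A minimal reproduction sketch outside the accel.sh harness follows; the SPDK variable is shorthand introduced here, and dropping -c /dev/fd/62 assumes the empty accel JSON config seen in this run (accel_json_cfg=() with no entries), so treat it as a sketch rather than the harness's exact invocation:

  # rerun the logged dif_generate workload for 1 second via the accel_perf example app
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w dif_generate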
00:08:02.536 [2024-07-22 19:12:21.282861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693851 ] 00:08:02.536 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.536 [2024-07-22 19:12:21.403068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.797 [2024-07-22 19:12:21.582573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.797 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.798 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.710 00:08:04.710 real 0m2.154s 00:08:04.710 user 0m1.979s 00:08:04.710 sys 0m0.188s 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.710 19:12:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:04.710 ************************************ 00:08:04.710 END TEST accel_dif_generate_copy 00:08:04.710 ************************************ 00:08:04.710 19:12:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.710 19:12:23 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:04.710 19:12:23 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:04.710 19:12:23 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:04.710 19:12:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.710 19:12:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.710 ************************************ 00:08:04.710 START TEST accel_comp 00:08:04.710 ************************************ 00:08:04.710 19:12:23 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:04.710 19:12:23 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:04.710 19:12:23 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:04.711 19:12:23 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:04.711 [2024-07-22 19:12:23.494147] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:04.711 [2024-07-22 19:12:23.494258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694211 ] 00:08:04.711 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.711 [2024-07-22 19:12:23.607199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.971 [2024-07-22 19:12:23.784771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.231 19:12:23 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.231 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:05.232 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:07.143 19:12:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.143 00:08:07.144 real 0m2.136s 00:08:07.144 user 0m1.976s 00:08:07.144 sys 0m0.174s 00:08:07.144 19:12:25 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.144 19:12:25 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:07.144 ************************************ 00:08:07.144 END TEST accel_comp 00:08:07.144 ************************************ 00:08:07.144 19:12:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:07.144 19:12:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:07.144 19:12:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:07.144 19:12:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.144 19:12:25 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.144 ************************************ 00:08:07.144 START TEST accel_decomp 00:08:07.144 ************************************ 00:08:07.144 19:12:25 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:07.144 19:12:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:07.144 [2024-07-22 19:12:25.712704] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
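The accel_comp case above and the accel_decomp case starting here both point accel_perf at the bundled test payload spdk/test/accel/bib via -l; -y appears only on the decompress runs and presumably enables verification of the inflated output. A hedged sketch of the two logged invocations, with the SPDK shorthand and the omitted -c /dev/fd/62 being the same assumptions as in the earlier sketch:

  # compress the bib test file, then decompress it with verification
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w compress   -l $SPDK/test/accel/bib
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y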
00:08:07.144 [2024-07-22 19:12:25.712880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694759 ] 00:08:07.144 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.144 [2024-07-22 19:12:25.834499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.144 [2024-07-22 19:12:26.017946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.404 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.405 19:12:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:09.318 19:12:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.318 00:08:09.318 real 0m2.163s 00:08:09.318 user 0m1.990s 00:08:09.318 sys 0m0.187s 00:08:09.318 19:12:27 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.318 19:12:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 ************************************ 00:08:09.318 END TEST accel_decomp 00:08:09.318 ************************************ 00:08:09.318 19:12:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.318 19:12:27 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.318 19:12:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:09.318 19:12:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.318 19:12:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.318 ************************************ 00:08:09.318 START TEST accel_decomp_full 00:08:09.318 ************************************ 00:08:09.318 19:12:27 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.318 19:12:27 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.318 19:12:27 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.319 19:12:27 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.319 19:12:27 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.319 19:12:27 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:09.319 19:12:27 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:09.319 [2024-07-22 19:12:27.943407] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:09.319 [2024-07-22 19:12:27.943512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695255 ] 00:08:09.319 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.319 [2024-07-22 19:12:28.062000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.319 [2024-07-22 19:12:28.241072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.579 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.580 19:12:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.492 19:12:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.492 19:12:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.492 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.492 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.492 19:12:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.492 19:12:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.492 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:11.493 19:12:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.493 00:08:11.493 real 0m2.168s 00:08:11.493 user 0m1.991s 00:08:11.493 sys 0m0.190s 00:08:11.493 19:12:30 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.493 19:12:30 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:11.493 ************************************ 00:08:11.493 END TEST accel_decomp_full 00:08:11.493 ************************************ 00:08:11.493 19:12:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.493 19:12:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.493 19:12:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:11.493 19:12:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.493 19:12:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.493 ************************************ 00:08:11.493 START TEST accel_decomp_mcore 00:08:11.493 ************************************ 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:11.493 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:11.493 [2024-07-22 19:12:30.177738] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
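accel_decomp_mcore runs the same decompress workload across a four-core mask: the harness passes -m 0xf, the EAL parameter line below carries -c 0xf, and four reactors start instead of one. A hedged reproduction sketch, with SPDK set as in the earlier sketch and the same assumptions about the omitted -c /dev/fd/62:

  # multi-core decompress variant on cores 0-3
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf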
00:08:11.493 [2024-07-22 19:12:30.177860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695617 ] 00:08:11.493 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.493 [2024-07-22 19:12:30.296184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.754 [2024-07-22 19:12:30.476861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.754 [2024-07-22 19:12:30.476944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.754 [2024-07-22 19:12:30.477058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.754 [2024-07-22 19:12:30.477085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.754 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.755 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.755 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.755 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.755 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.755 19:12:30 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:11.755 19:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.701 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.701 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.702 00:08:13.702 real 0m2.167s 00:08:13.702 user 0m6.538s 00:08:13.702 sys 0m0.206s 00:08:13.702 19:12:32 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.702 19:12:32 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 ************************************ 00:08:13.702 END TEST accel_decomp_mcore 00:08:13.702 ************************************ 00:08:13.702 19:12:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.702 19:12:32 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.702 19:12:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:13.702 19:12:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.702 19:12:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 ************************************ 00:08:13.702 START TEST accel_decomp_full_mcore 00:08:13.702 ************************************ 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:13.702 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:13.702 [2024-07-22 19:12:32.419192] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
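[editor's note, not part of the log] The accel_decomp_full_mcore case starting here boils down to one accel_perf invocation, visible in the trace above. A minimal sketch of re-running it by hand follows; the flag meanings are inferred from that command line and may differ between SPDK revisions, and the JSON config the wrapper normally supplies on /dev/fd/62 is omitted on the assumption that the pure software path needs none.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run for 1 second, -w decompress: workload, -l: compressed input file,
  # -y: verify output, -o 0: whole-file operations, -m 0xf: core mask (cores 0-3)
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf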
00:08:13.702 [2024-07-22 19:12:32.419313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696152 ] 00:08:13.702 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.702 [2024-07-22 19:12:32.541443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.963 [2024-07-22 19:12:32.723796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.963 [2024-07-22 19:12:32.723880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.963 [2024-07-22 19:12:32.723993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.963 [2024-07-22 19:12:32.724017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.963 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.964 19:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.877 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.878 00:08:15.878 real 0m2.193s 00:08:15.878 user 0m6.649s 00:08:15.878 sys 0m0.197s 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.878 19:12:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:15.878 ************************************ 00:08:15.878 END TEST accel_decomp_full_mcore 00:08:15.878 ************************************ 00:08:15.878 19:12:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.878 19:12:34 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.878 19:12:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:15.878 19:12:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.878 19:12:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.878 ************************************ 00:08:15.878 START TEST accel_decomp_mthread 00:08:15.878 ************************************ 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:15.878 19:12:34 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:15.878 [2024-07-22 19:12:34.689614] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:15.878 [2024-07-22 19:12:34.689720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696662 ] 00:08:15.878 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.878 [2024-07-22 19:12:34.799666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.139 [2024-07-22 19:12:34.974303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.399 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.400 19:12:35 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.400 19:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.314 00:08:18.314 real 0m2.139s 00:08:18.314 user 0m1.978s 00:08:18.314 sys 0m0.176s 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.314 19:12:36 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:18.314 ************************************ 00:08:18.314 END TEST accel_decomp_mthread 00:08:18.314 ************************************ 00:08:18.314 19:12:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.314 19:12:36 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.314 19:12:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:18.314 19:12:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.314 19:12:36 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.314 ************************************ 00:08:18.314 START TEST accel_decomp_full_mthread 00:08:18.314 ************************************ 00:08:18.314 19:12:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.314 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:18.314 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:18.314 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.314 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.314 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.314 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:18.315 19:12:36 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:18.315 [2024-07-22 19:12:36.905965] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
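[editor's note, not part of the log] The "-c /dev/fd/62" argument seen in every traced command is bash process substitution: build_accel_config assembles the entries of $accel_json_cfg, pipes them through "jq -r .", and accel_perf reads the result as its JSON config. A hand-rolled sketch of the same pattern for the full_mthread case below; the empty JSON object is an assumption standing in for the wrapper's generated config, and -T 2 (two worker threads) is taken from the trace.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # process substitution hands accel_perf a /dev/fd/NN path, just as the wrapper does
  "$SPDK/build/examples/accel_perf" -c <(echo '{}') \
      -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2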
00:08:18.315 [2024-07-22 19:12:36.906077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697021 ] 00:08:18.315 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.315 [2024-07-22 19:12:37.025425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.315 [2024-07-22 19:12:37.202855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.576 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.577 19:12:37 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.577 19:12:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.493 00:08:20.493 real 0m2.190s 00:08:20.493 user 0m2.014s 00:08:20.493 sys 0m0.191s 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.493 19:12:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:20.493 ************************************ 00:08:20.493 END 
TEST accel_decomp_full_mthread 00:08:20.493 ************************************ 00:08:20.493 19:12:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.493 19:12:39 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:20.493 19:12:39 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:20.493 19:12:39 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:20.493 19:12:39 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:20.493 19:12:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.493 19:12:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.493 19:12:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.493 19:12:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.493 19:12:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.493 19:12:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.493 19:12:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.493 19:12:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:20.493 19:12:39 accel -- accel/accel.sh@41 -- # jq -r . 00:08:20.493 ************************************ 00:08:20.493 START TEST accel_dif_functional_tests 00:08:20.493 ************************************ 00:08:20.493 19:12:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:20.493 [2024-07-22 19:12:39.200639] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:20.493 [2024-07-22 19:12:39.200742] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697557 ] 00:08:20.493 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.493 [2024-07-22 19:12:39.316565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.754 [2024-07-22 19:12:39.495027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.754 [2024-07-22 19:12:39.495104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.754 [2024-07-22 19:12:39.495108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.015 00:08:21.015 00:08:21.015 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.015 http://cunit.sourceforge.net/ 00:08:21.015 00:08:21.015 00:08:21.015 Suite: accel_dif 00:08:21.015 Test: verify: DIF generated, GUARD check ...passed 00:08:21.015 Test: verify: DIF generated, APPTAG check ...passed 00:08:21.015 Test: verify: DIF generated, REFTAG check ...passed 00:08:21.015 Test: verify: DIF not generated, GUARD check ...[2024-07-22 19:12:39.717005] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:21.015 passed 00:08:21.015 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 19:12:39.717071] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:21.015 passed 00:08:21.015 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 19:12:39.717107] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:21.015 passed 00:08:21.015 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:21.015 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 
19:12:39.717185] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:21.015 passed 00:08:21.015 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:21.015 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:21.015 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:21.015 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 19:12:39.717362] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:21.015 passed 00:08:21.015 Test: verify copy: DIF generated, GUARD check ...passed 00:08:21.015 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:21.015 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:21.015 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 19:12:39.717558] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:21.015 passed 00:08:21.015 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 19:12:39.717603] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:21.015 passed 00:08:21.016 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 19:12:39.717653] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:21.016 passed 00:08:21.016 Test: generate copy: DIF generated, GUARD check ...passed 00:08:21.016 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:21.016 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:21.016 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:21.016 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:21.016 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:21.016 Test: generate copy: iovecs-len validate ...[2024-07-22 19:12:39.717973] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
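[editor's note, not part of the log] The *ERROR* lines above and below are expected output: each negative case feeds DIF metadata with a mismatched Guard, App Tag or Ref Tag and checks that verification fails, which is why every one of them is still reported as "passed". A minimal sketch of replaying the suite outside the harness, using the same binary the trace shows; the empty JSON object passed as the accel config is an assumption.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same binary invoked by 'run_test accel_dif_functional_tests'; CUnit prints the summary
  "$SPDK/test/accel/dif/dif" -c <(echo '{}')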
00:08:21.016 passed 00:08:21.016 Test: generate copy: buffer alignment validate ...passed 00:08:21.016 00:08:21.016 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.016 suites 1 1 n/a 0 0 00:08:21.016 tests 26 26 26 0 0 00:08:21.016 asserts 115 115 115 0 n/a 00:08:21.016 00:08:21.016 Elapsed time = 0.003 seconds 00:08:21.959 00:08:21.959 real 0m1.441s 00:08:21.959 user 0m2.771s 00:08:21.959 sys 0m0.224s 00:08:21.959 19:12:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.959 19:12:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:21.959 ************************************ 00:08:21.959 END TEST accel_dif_functional_tests 00:08:21.959 ************************************ 00:08:21.959 19:12:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.959 00:08:21.959 real 0m51.662s 00:08:21.959 user 0m57.234s 00:08:21.959 sys 0m6.166s 00:08:21.959 19:12:40 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.959 19:12:40 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.959 ************************************ 00:08:21.959 END TEST accel 00:08:21.959 ************************************ 00:08:21.959 19:12:40 -- common/autotest_common.sh@1142 -- # return 0 00:08:21.959 19:12:40 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:21.959 19:12:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.959 19:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.959 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.959 ************************************ 00:08:21.959 START TEST accel_rpc 00:08:21.959 ************************************ 00:08:21.959 19:12:40 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:21.959 * Looking for test storage... 00:08:21.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:21.959 19:12:40 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:21.959 19:12:40 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2697923 00:08:21.959 19:12:40 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2697923 00:08:21.959 19:12:40 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:21.959 19:12:40 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2697923 ']' 00:08:21.959 19:12:40 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.959 19:12:40 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.959 19:12:40 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.959 19:12:40 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.959 19:12:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.959 [2024-07-22 19:12:40.882662] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
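[editor's note, not part of the log] The accel_rpc suite that starts here drives a bare spdk_tgt over JSON-RPC instead of running accel_perf. A condensed sketch of the sequence the trace below walks through; binary paths and RPC method names are taken from the trace, while the sleep is only a crude stand-in for the harness's waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &     # RPC server up, subsystems not initialized yet
  tgt_pid=$!
  sleep 1                                         # stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
  "$SPDK/scripts/rpc.py" framework_start_init                    # finish initialization
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # prints: software
  kill "$tgt_pid"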
00:08:21.959 [2024-07-22 19:12:40.882801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697923 ] 00:08:22.220 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.220 [2024-07-22 19:12:41.011910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.480 [2024-07-22 19:12:41.192979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.743 19:12:41 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.743 19:12:41 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:22.743 19:12:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:22.743 19:12:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:22.743 19:12:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:22.743 19:12:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:22.743 19:12:41 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:22.743 19:12:41 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:22.743 19:12:41 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.743 19:12:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.743 ************************************ 00:08:22.743 START TEST accel_assign_opcode 00:08:22.743 ************************************ 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.743 [2024-07-22 19:12:41.658721] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.743 [2024-07-22 19:12:41.666714] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.743 19:12:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:23.315 
19:12:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.315 software 00:08:23.315 00:08:23.315 real 0m0.607s 00:08:23.315 user 0m0.045s 00:08:23.315 sys 0m0.010s 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.315 19:12:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:23.315 ************************************ 00:08:23.316 END TEST accel_assign_opcode 00:08:23.316 ************************************ 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:23.576 19:12:42 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2697923 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2697923 ']' 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2697923 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2697923 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2697923' 00:08:23.576 killing process with pid 2697923 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@967 -- # kill 2697923 00:08:23.576 19:12:42 accel_rpc -- common/autotest_common.sh@972 -- # wait 2697923 00:08:25.490 00:08:25.490 real 0m3.298s 00:08:25.490 user 0m3.260s 00:08:25.490 sys 0m0.534s 00:08:25.490 19:12:43 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.490 19:12:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.490 ************************************ 00:08:25.490 END TEST accel_rpc 00:08:25.490 ************************************ 00:08:25.490 19:12:44 -- common/autotest_common.sh@1142 -- # return 0 00:08:25.490 19:12:44 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:25.490 19:12:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:25.490 19:12:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.490 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:25.490 ************************************ 00:08:25.490 START TEST app_cmdline 00:08:25.490 ************************************ 00:08:25.490 19:12:44 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:25.490 * Looking for test storage... 
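[editor's note, not part of the log] The app_cmdline suite below starts spdk_tgt with --rpcs-allowed, so only spdk_get_version and rpc_get_methods are callable, and then confirms that any other method is rejected with JSON-RPC error -32601. A minimal sketch of the same check; the sleep again stands in for waitforlisten.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  sleep 1
  "$SPDK/scripts/rpc.py" spdk_get_version          # allowed: returns the version object
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats    # rejected: "Method not found" (-32601)
  kill "$tgt_pid"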
00:08:25.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:25.490 19:12:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:25.490 19:12:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2698645 00:08:25.490 19:12:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2698645 00:08:25.490 19:12:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:25.490 19:12:44 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2698645 ']' 00:08:25.490 19:12:44 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.490 19:12:44 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.490 19:12:44 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.490 19:12:44 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.490 19:12:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.490 [2024-07-22 19:12:44.259468] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:25.490 [2024-07-22 19:12:44.259598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698645 ] 00:08:25.490 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.490 [2024-07-22 19:12:44.385007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.752 [2024-07-22 19:12:44.566152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.324 19:12:45 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.324 19:12:45 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:26.324 19:12:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:26.585 { 00:08:26.585 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:08:26.585 "fields": { 00:08:26.585 "major": 24, 00:08:26.585 "minor": 9, 00:08:26.585 "patch": 0, 00:08:26.585 "suffix": "-pre", 00:08:26.585 "commit": "f7b31b2b9" 00:08:26.585 } 00:08:26.585 } 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.585 request: 00:08:26.585 { 00:08:26.585 "method": "env_dpdk_get_mem_stats", 00:08:26.585 "req_id": 1 00:08:26.585 } 00:08:26.585 Got JSON-RPC error response 00:08:26.585 response: 00:08:26.585 { 00:08:26.585 "code": -32601, 00:08:26.585 "message": "Method not found" 00:08:26.585 } 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:26.585 19:12:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2698645 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2698645 ']' 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2698645 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:26.585 19:12:45 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:26.846 19:12:45 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2698645 00:08:26.846 19:12:45 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:26.846 19:12:45 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:26.846 19:12:45 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2698645' 00:08:26.846 killing process with pid 2698645 00:08:26.846 19:12:45 app_cmdline -- common/autotest_common.sh@967 -- # kill 2698645 00:08:26.846 19:12:45 app_cmdline -- common/autotest_common.sh@972 -- # wait 2698645 00:08:28.758 00:08:28.758 real 0m3.156s 00:08:28.758 user 0m3.366s 00:08:28.758 sys 0m0.539s 00:08:28.758 19:12:47 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
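The app_cmdline run above checks spdk_tgt's --rpcs-allowed filter: only the two whitelisted methods may be called, and anything else is rejected with JSON-RPC error -32601 ("Method not found"), exactly as the error response in the trace shows. A condensed sketch of that check (a simplification of test/app/cmdline.sh, not a verbatim excerpt; the SPDK variable and the trailing echo are mine):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target with an RPC allow-list (the real script then waits for
    # the socket via waitforlisten before issuing any calls).
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    # Allowed methods work and report the build's version and method list ...
    $SPDK/scripts/rpc.py spdk_get_version
    $SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort

    # ... anything outside the allow-list fails with JSON-RPC -32601.
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected as expected"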
00:08:28.758 19:12:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:28.758 ************************************ 00:08:28.758 END TEST app_cmdline 00:08:28.758 ************************************ 00:08:28.758 19:12:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:28.758 19:12:47 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:28.758 19:12:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:28.758 19:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.758 19:12:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.758 ************************************ 00:08:28.758 START TEST version 00:08:28.758 ************************************ 00:08:28.758 19:12:47 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:28.758 * Looking for test storage... 00:08:28.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:28.758 19:12:47 version -- app/version.sh@17 -- # get_header_version major 00:08:28.758 19:12:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.758 19:12:47 version -- app/version.sh@17 -- # major=24 00:08:28.758 19:12:47 version -- app/version.sh@18 -- # get_header_version minor 00:08:28.758 19:12:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.758 19:12:47 version -- app/version.sh@18 -- # minor=9 00:08:28.758 19:12:47 version -- app/version.sh@19 -- # get_header_version patch 00:08:28.758 19:12:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.758 19:12:47 version -- app/version.sh@19 -- # patch=0 00:08:28.758 19:12:47 version -- app/version.sh@20 -- # get_header_version suffix 00:08:28.758 19:12:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.758 19:12:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.758 19:12:47 version -- app/version.sh@20 -- # suffix=-pre 00:08:28.758 19:12:47 version -- app/version.sh@22 -- # version=24.9 00:08:28.759 19:12:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:28.759 19:12:47 version -- app/version.sh@28 -- # version=24.9rc0 00:08:28.759 19:12:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:28.759 19:12:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:28.759 19:12:47 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:28.759 19:12:47 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:28.759 00:08:28.759 real 0m0.185s 00:08:28.759 user 0m0.098s 00:08:28.759 sys 0m0.130s 00:08:28.759 19:12:47 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.759 19:12:47 version -- common/autotest_common.sh@10 -- # set +x 00:08:28.759 ************************************ 00:08:28.759 END TEST version 00:08:28.759 ************************************ 00:08:28.759 19:12:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:28.759 19:12:47 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@198 -- # uname -s 00:08:28.759 19:12:47 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:28.759 19:12:47 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:28.759 19:12:47 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:28.759 19:12:47 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:28.759 19:12:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.759 19:12:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.759 19:12:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:28.759 19:12:47 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:28.759 19:12:47 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.759 19:12:47 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.759 19:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.759 19:12:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.759 ************************************ 00:08:28.759 START TEST nvmf_tcp 00:08:28.759 ************************************ 00:08:28.759 19:12:47 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.759 * Looking for test storage... 00:08:28.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:28.759 19:12:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:29.021 19:12:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:29.021 19:12:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:29.021 19:12:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.021 19:12:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.021 19:12:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.021 ************************************ 00:08:29.021 START TEST nvmf_target_core 00:08:29.021 ************************************ 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:29.021 * Looking for test storage... 
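The version test that completed above cross-checks include/spdk/version.h against the spdk Python package. A compact sketch of the same parsing, using the grep/cut/tr pipeline from app/version.sh as traced (the hdr/major/minor/suffix variables, the shortened PYTHONPATH and the final grep comparison are my condensation):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hdr=$SPDK/include/spdk/version.h
    # Grep each #define, take field 2, strip the quotes.
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')

    # With patch 0 and a "-pre" suffix this yields 24.9rc0, as in the trace.
    version=$major.$minor; [[ $suffix == -pre ]] && version=${version}rc0

    # The Python package must report the same string.
    PYTHONPATH=$SPDK/python python3 -c 'import spdk; print(spdk.__version__)' | grep -qx "$version"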
00:08:29.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.021 ************************************ 00:08:29.021 START TEST nvmf_abort 00:08:29.021 ************************************ 00:08:29.021 19:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:29.283 * Looking for test storage... 00:08:29.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.283 19:12:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.283 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.284 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.284 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.867 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.868 19:12:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:35.868 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:35.868 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.868 19:12:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:35.868 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:35.868 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.868 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.129 19:12:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.129 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.129 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.129 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.129 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.129 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.129 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.129 19:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:36.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:08:36.391 00:08:36.391 --- 10.0.0.2 ping statistics --- 00:08:36.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.391 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:08:36.391 00:08:36.391 --- 10.0.0.1 ping statistics --- 00:08:36.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.391 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2703301 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2703301 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2703301 ']' 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.391 19:12:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.391 [2024-07-22 19:12:55.275308] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
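nvmftestinit above isolates the target-side port in its own network namespace so that the nvmf_tgt (launched via ip netns exec cvl_0_0_ns_spdk) and the host-side initiator talk over a real 10.0.0.0/24 link. The essential steps, condensed from the nvmf/common.sh trace (the cvl_0_0/cvl_0_1 device names are specific to this machine's E810 ports):

    # Put the target port in a netns and address both ends of the link.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator side and sanity-check the path
    # in both directions before starting nvmf_tgt inside the namespace.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1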
00:08:36.391 [2024-07-22 19:12:55.275451] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.652 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.652 [2024-07-22 19:12:55.428045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.913 [2024-07-22 19:12:55.655269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.913 [2024-07-22 19:12:55.655339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.913 [2024-07-22 19:12:55.655353] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.913 [2024-07-22 19:12:55.655364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.913 [2024-07-22 19:12:55.655375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.913 [2024-07-22 19:12:55.655541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.913 [2024-07-22 19:12:55.655663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.913 [2024-07-22 19:12:55.655694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.175 [2024-07-22 19:12:56.065774] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.175 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.437 Malloc0 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.437 Delay0 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.437 [2024-07-22 19:12:56.182039] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.437 19:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:37.437 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.437 [2024-07-22 19:12:56.332406] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:39.985 Initializing NVMe Controllers 00:08:39.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:39.985 controller IO queue size 128 less than required 00:08:39.985 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:39.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:39.985 Initialization complete. Launching workers. 
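The abort workload initializing above was wired up entirely over RPC: a malloc bdev wrapped in a delay bdev (so requests stay queued long enough to be abortable), exposed through a single NVMe/TCP subsystem, then driven by the abort example at queue depth 128. A sketch of that setup with the arguments copied from the trace (the RPC variable is mine; the summary lines that follow count completed, aborted and failed-to-submit commands):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Drive the subsystem with the abort example for 1 second at qd 128.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128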
00:08:39.985 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31974 00:08:39.985 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32031, failed to submit 66 00:08:39.985 success 31974, unsuccess 57, failed 0 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.985 rmmod nvme_tcp 00:08:39.985 rmmod nvme_fabrics 00:08:39.985 rmmod nvme_keyring 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2703301 ']' 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2703301 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2703301 ']' 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2703301 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2703301 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2703301' 00:08:39.985 killing process with pid 2703301 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2703301 00:08:39.985 19:12:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2703301 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.557 19:12:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.518 00:08:42.518 real 0m13.446s 00:08:42.518 user 0m14.481s 00:08:42.518 sys 0m6.174s 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.518 ************************************ 00:08:42.518 END TEST nvmf_abort 00:08:42.518 ************************************ 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.518 ************************************ 00:08:42.518 START TEST nvmf_ns_hotplug_stress 00:08:42.518 ************************************ 00:08:42.518 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:42.780 * Looking for test storage... 
00:08:42.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.780 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
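The gather_supported_nvmf_pci_devs trace that follows buckets the host's NICs by PCI vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs) before choosing the two test ports. A minimal sketch of that classification, assuming plain lspci -Dnmm output rather than the harness's own pci_bus_cache helpers, might look like:

# Sketch only: bucket NICs by vendor:device ID, mirroring the e810/x722/mlx arrays traced below.
e810=(); x722=(); mlx=()
while read -r addr vendor device; do
  case "$vendor:$device" in
    8086:1592 | 8086:159b) e810+=("$addr") ;;   # Intel E810 family
    8086:37d2)             x722+=("$addr") ;;   # Intel X722
    15b3:*)                mlx+=("$addr")  ;;   # Mellanox ConnectX family
  esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $3, $4}')
echo "E810 ports found: ${e810[*]:-none}"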
00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:49.370 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.370 19:13:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:49.370 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.370 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:49.371 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:49.371 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.371 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.632 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.632 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.632 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.632 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.632 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.632 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.632 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:08:49.893 00:08:49.893 --- 10.0.0.2 ping statistics --- 00:08:49.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.893 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:08:49.893 00:08:49.893 --- 10.0.0.1 ping statistics --- 00:08:49.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.893 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2708243 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2708243 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2708243 ']' 00:08:49.893 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.894 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.894 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
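Condensing the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, the two sides get 10.0.0.2 and 10.0.0.1, traffic to port 4420 is allowed, and reachability is verified in both directions with ping. Run as root, the same layout is roughly:

# Condensed from the nvmf_tcp_init commands traced above (interface names are the ones in this log).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> initiator

The target application is then started inside that namespace, which is the nvmfappstart step traced just above (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE), and waitforlisten blocks until the RPC socket is up.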
00:08:49.894 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.894 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.894 [2024-07-22 19:13:08.747551] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:49.894 [2024-07-22 19:13:08.747675] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.894 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.154 [2024-07-22 19:13:08.900149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.414 [2024-07-22 19:13:09.126279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.414 [2024-07-22 19:13:09.126357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.415 [2024-07-22 19:13:09.126373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.415 [2024-07-22 19:13:09.126384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.415 [2024-07-22 19:13:09.126396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.415 [2024-07-22 19:13:09.126565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.415 [2024-07-22 19:13:09.126718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.415 [2024-07-22 19:13:09.126748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:50.676 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.937 [2024-07-22 19:13:09.674259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.937 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.937 19:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.198 
[2024-07-22 19:13:10.027880] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.198 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.460 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:51.460 Malloc0 00:08:51.721 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:51.721 Delay0 00:08:51.721 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.982 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:51.982 NULL1 00:08:51.982 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:52.243 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:52.243 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2708713 00:08:52.243 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:52.243 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.505 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.505 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:52.505 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:52.765 [2024-07-22 19:13:11.554190] bdev.c:5060:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:08:52.765 true 00:08:52.765 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:52.765 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.026 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.026 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:53.026 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:53.287 true 00:08:53.287 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:53.287 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.548 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.548 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:53.548 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:53.809 true 00:08:53.809 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:53.809 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.070 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.070 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:54.070 19:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:54.331 true 00:08:54.331 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:54.331 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.591 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.591 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:54.591 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:54.852 true 00:08:54.852 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:54.852 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.114 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.114 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:55.114 19:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:55.375 true 00:08:55.375 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:55.375 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.636 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.636 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:55.636 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:55.896 true 00:08:55.896 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:55.896 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.896 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.157 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:56.157 19:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:56.418 true 00:08:56.418 19:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:56.418 19:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.418 19:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.679 19:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:56.679 19:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:56.940 true 00:08:56.941 19:13:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:56.941 19:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.941 19:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.202 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:57.202 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:57.202 true 00:08:57.462 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:57.462 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.462 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.723 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:57.723 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:57.723 true 00:08:57.723 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:57.723 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.985 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.247 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:58.247 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:58.247 true 00:08:58.247 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:58.247 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.507 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.767 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:58.767 19:13:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:58.767 true 00:08:58.767 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:58.767 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.027 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.287 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:59.287 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:59.287 true 00:08:59.287 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:59.287 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.548 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.809 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:59.809 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:59.809 true 00:08:59.809 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:08:59.809 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.069 19:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.329 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:00.329 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:00.329 true 00:09:00.329 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:00.329 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.590 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.850 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:00.850 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:00.850 true 00:09:00.850 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:00.850 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.111 19:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.111 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:01.111 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:01.374 true 00:09:01.374 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:01.374 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.635 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.635 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:01.635 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:01.895 true 00:09:01.895 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:01.895 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.156 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.156 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:02.156 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:02.417 true 00:09:02.417 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:02.417 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.678 19:13:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.678 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:02.678 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:02.939 true 00:09:02.939 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:02.939 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.199 19:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.199 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:03.200 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:03.459 true 00:09:03.459 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:03.459 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.755 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.755 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:03.755 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:04.042 true 00:09:04.042 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:04.042 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.042 19:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.302 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:04.302 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:04.302 true 00:09:04.302 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:04.302 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.561 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.821 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:04.821 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:04.821 true 00:09:04.821 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:04.821 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.082 19:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.342 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:05.342 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:05.342 true 00:09:05.342 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:05.342 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.603 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.863 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:05.863 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:05.863 true 00:09:05.863 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:05.863 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.124 19:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.383 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:06.383 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:06.383 true 00:09:06.383 19:13:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:06.383 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.643 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.643 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:06.643 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:06.904 true 00:09:06.904 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:06.904 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.165 19:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.165 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:07.165 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:07.425 true 00:09:07.425 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:07.425 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.686 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.686 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:07.686 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:07.946 true 00:09:07.946 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:07.946 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.207 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.207 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:08.207 19:13:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:08.467 true 00:09:08.467 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:08.467 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.727 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.727 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:08.727 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:08.987 true 00:09:08.987 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:08.987 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.248 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.248 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:09.248 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:09.508 true 00:09:09.508 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:09.508 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.769 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.769 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:09.769 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:10.029 true 00:09:10.029 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:10.029 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.291 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.291 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:10.291 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:10.552 true 00:09:10.552 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:10.552 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.814 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.814 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:10.814 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:11.077 true 00:09:11.077 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:11.077 19:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.077 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.338 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:11.338 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:11.599 true 00:09:11.599 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:11.599 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.599 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.859 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:11.859 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:12.121 true 00:09:12.121 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:12.121 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.121 19:13:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.381 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:12.381 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:12.641 true 00:09:12.641 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:12.641 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.641 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.902 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:12.902 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:13.163 true 00:09:13.163 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:13.163 19:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.163 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.424 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:13.424 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:13.424 true 00:09:13.686 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:13.686 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.686 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.947 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:13.947 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:13.947 true 00:09:14.207 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:14.208 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.208 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.469 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:14.469 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:14.469 true 00:09:14.469 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:14.469 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.729 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.990 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:14.990 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:14.990 true 00:09:14.990 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:14.990 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.251 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.512 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:15.512 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:15.512 true 00:09:15.512 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:15.512 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.773 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.034 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:16.034 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:16.034 true 00:09:16.034 19:13:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:16.034 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.294 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.556 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:16.556 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:16.556 true 00:09:16.556 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:16.556 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.816 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.077 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:17.077 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:17.077 true 00:09:17.077 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:17.077 19:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.338 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.601 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:09:17.601 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:09:17.601 true 00:09:17.601 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:17.601 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.864 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.864 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:09:17.864 19:13:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:09:18.129 true 00:09:18.129 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:18.129 19:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.390 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.390 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:09:18.390 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:09:18.651 true 00:09:18.651 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:18.651 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.911 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.912 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:09:18.912 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:09:19.172 true 00:09:19.172 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:19.172 19:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.432 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.432 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:09:19.432 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:09:19.694 true 00:09:19.694 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:19.694 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.958 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.958 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:09:19.958 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:09:20.219 true 00:09:20.219 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:20.219 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.480 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.480 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:09:20.480 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:09:20.741 true 00:09:20.741 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:20.741 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.001 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.001 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:09:21.002 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:09:21.262 true 00:09:21.262 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:21.262 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.523 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.523 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:09:21.523 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:09:21.784 true 00:09:21.784 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:21.784 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.045 19:13:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.045 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:09:22.045 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:09:22.306 true 00:09:22.306 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:22.306 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.306 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.576 Initializing NVMe Controllers 00:09:22.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:22.576 Controller IO queue size 128, less than required. 00:09:22.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:22.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:22.576 Initialization complete. Launching workers. 00:09:22.576 ======================================================== 00:09:22.576 Latency(us) 00:09:22.577 Device Information : IOPS MiB/s Average min max 00:09:22.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27501.91 13.43 4654.18 1921.32 11615.43 00:09:22.577 ======================================================== 00:09:22.577 Total : 27501.91 13.43 4654.18 1921.32 11615.43 00:09:22.577 00:09:22.577 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:09:22.577 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:09:22.885 true 00:09:22.885 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2708713 00:09:22.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2708713) - No such process 00:09:22.885 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2708713 00:09:22.885 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.885 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.167 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:23.167 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:23.167 19:13:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:23.167 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.167 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:23.167 null0 00:09:23.167 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.167 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.167 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:23.428 null1 00:09:23.428 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.428 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.428 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:23.689 null2 00:09:23.689 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.689 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.689 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:23.689 null3 00:09:23.689 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.689 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.689 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:23.950 null4 00:09:23.950 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.950 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.950 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:23.950 null5 00:09:24.211 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.211 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.211 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:24.211 null6 00:09:24.211 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.211 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
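The trace up to this point is the tail of the single-namespace stress loop in target/ns_hotplug_stress.sh (lines 44-50 as traced): while the I/O generator process (PID 2708713) is still alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, hot-adds the Delay0 bdev back as a namespace, bumps null_size (1029 through 1060 in this excerpt) and resizes the NULL1 null bdev to the new size; the loop ends when kill -0 reports "No such process" and the perf run prints its latency summary (the workload is attached to NSID 2, so it keeps running while namespace 1 is cycled). A minimal shell sketch of that loop, reconstructed from the traced commands — the rpc shorthand, PERF_PID name, and the loop wrapper are assumptions, not the script's verbatim text:

    # Sketch reconstructed from the trace above (ns_hotplug_stress.sh@44-@50); not the verbatim script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed shorthand
    while kill -0 "$PERF_PID" 2>/dev/null; do                              # PERF_PID is 2708713 in this run
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0       # hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                                       # 1029..1060 in this excerpt
        $rpc bdev_null_resize NULL1 "$null_size"                           # resize NULL1 to the new size
    done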
00:09:24.211 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:24.472 null7 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
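With the perf process gone, the script moves to the parallel hotplug phase traced above: nthreads=8, an empty pids array, and two loops — one that creates a null bdev per worker (bdev_null_create null0 through null7, each with arguments 100 and 4096 for size and block size), and one that launches an add_remove worker per bdev and records its PID with pids+=($!). A sketch of that driver, reconstructed from the traced lines 58-64; the backgrounding with & is implied by the recorded $! rather than shown verbatim:

    # Sketch reconstructed from the trace (ns_hotplug_stress.sh@58-@64); not the verbatim script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed shorthand
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096    # null0..null7, as in the trace
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &           # one worker per namespace ID (backgrounding assumed)
        pids+=($!)
    done

The wait on all eight worker PIDs (2715268, 2715269, ...) appears a little further on in the trace, at script line 66.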
00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.472 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
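The interleaved sh@14-sh@18 entries here are the eight add_remove workers running concurrently: each is bound to one namespace ID and one null bdev (local nsid=N bdev=nullM) and performs ten add/remove passes against nqn.2016-06.io.spdk:cnode1, so namespaces 1 through 8 are hot-added and hot-removed in parallel while the subsystem stays live. A sketch of the worker body, reconstructed from the traced lines 14-18; the function wrapper and the rpc shorthand are assumptions:

    # Sketch reconstructed from the trace (ns_hotplug_stress.sh@14-@18); not the verbatim script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed shorthand
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do             # ten passes per worker, per the (( i < 10 )) trace
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # attach bdev as NSID nsid
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # detach it again
        done
    }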
00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2715268 2715269 2715271 2715274 2715276 2715278 2715280 2715282 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:24.473 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.733 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:24.994 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:24.995 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:25.256 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:25.256 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.256 19:13:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.256 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.517 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:25.518 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.779 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.041 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.042 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.304 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
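(Note on the trace above: target/ns_hotplug_stress.sh drives the same three statements over and over. Line @16 is the loop header, @17 hot-adds a namespace with nvmf_subsystem_add_ns, and @18 hot-removes one with nvmf_subsystem_remove_ns, always against nqn.2016-06.io.spdk:cnode1 and the null0..null7 bdevs. The following is a minimal sketch of one such loop, reconstructed from the commands shown in the trace; the random namespace-ID selection and the "|| true" error tolerance are assumptions, not the literal script:)

    #!/usr/bin/env bash
    # Sketch of the add/remove cycle traced at ns_hotplug_stress.sh@16-@18.
    # Assumes the target already exposes nqn.2016-06.io.spdk:cnode1 plus eight
    # null bdevs (null0..null7); picking the IDs at random and tolerating
    # collisions with "|| true" are assumptions for illustration only.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; ++i)); do                                              # @16
        n=$((RANDOM % 8 + 1))
        $rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true    # @17
        $rpc_py nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true    # @18
    done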
00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.566 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
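(The back-to-back pairs of (( ++i )) and (( i < 10 )) in the trace, with no @17/@18 command between them, are what bash xtrace output looks like when several copies of that loop run at the same time and their trace lines interleave. Below is a hedged illustration of such a parallel driver; the runner count, the function name and the use of background jobs are assumptions made only to show how the interleaving arises:)

    # Hypothetical parallel driver: several hotplug loops hammer the same
    # subsystem concurrently, so their xtrace lines interleave as seen above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    hotplug_loop() {
        for ((i = 0; i < 10; ++i)); do
            n=$((RANDOM % 8 + 1))
            $rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true
        done
    }

    for _ in 1 2 3; do
        hotplug_loop &      # each background runner produces its own @16-@18 trace
    done
    wait                    # corresponds to the point where every loop reaches i == 10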
00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.828 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:27.089 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.089 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.350 19:13:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.350 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.612 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:27.873 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.133 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.133 rmmod nvme_tcp 00:09:28.133 rmmod nvme_fabrics 00:09:28.133 rmmod nvme_keyring 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2708243 ']' 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2708243 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2708243 ']' 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2708243 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2708243 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2708243' 00:09:28.133 killing process with pid 2708243 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2708243 00:09:28.133 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2708243 00:09:29.075 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:29.075 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:29.075 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:29.075 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.075 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:29.075 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.075 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.075 19:13:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.989 00:09:30.989 real 0m48.368s 00:09:30.989 user 3m15.797s 00:09:30.989 sys 0m16.743s 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.989 ************************************ 00:09:30.989 END TEST nvmf_ns_hotplug_stress 00:09:30.989 ************************************ 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.989 19:13:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.989 ************************************ 00:09:30.989 START TEST nvmf_delete_subsystem 00:09:30.989 ************************************ 00:09:30.990 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:31.251 * Looking for test storage... 00:09:31.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.251 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.251 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.252 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:37.845 19:13:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:37.845 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:37.845 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:37.845 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:37.845 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.845 19:13:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:37.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:09:37.845 00:09:37.845 --- 10.0.0.2 ping statistics --- 00:09:37.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.845 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:09:37.845 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:09:37.845 00:09:37.845 --- 10.0.0.1 ping statistics --- 00:09:37.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.845 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2720430 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2720430 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' 
-z 2720430 ']' 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.846 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.846 [2024-07-22 19:13:56.764601] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:37.846 [2024-07-22 19:13:56.764692] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.106 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.106 [2024-07-22 19:13:56.885431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.366 [2024-07-22 19:13:57.062371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.366 [2024-07-22 19:13:57.062417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.366 [2024-07-22 19:13:57.062430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.366 [2024-07-22 19:13:57.062440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.366 [2024-07-22 19:13:57.062450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
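Condensed, the nvmf_tcp_init steps traced above amount to the following standalone sketch. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this E810 host; the canonical logic lives in nvmf/common.sh.

    # Target port moves into its own network namespace; the initiator port stays in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
    # nvmf_tgt then runs inside the target namespace (nvmfappstart -m 0x3):
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &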
00:09:38.366 [2024-07-22 19:13:57.062602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.366 [2024-07-22 19:13:57.062631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 [2024-07-22 19:13:57.540069] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 [2024-07-22 19:13:57.564427] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.627 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 NULL1 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.888 Delay0 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2720588 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:38.888 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:38.888 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.888 [2024-07-22 19:13:57.701772] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
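Once the target is up, rpc_cmd (the test harness wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock) carries the configuration traced above; spelled out as direct rpc.py calls it is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # I/O load that the nvmf_delete_subsystem call below races against:
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

With the delay bdev adding large artificial latency and a queue depth of 128, requests are still in flight when the subsystem is deleted two seconds later, which is consistent with the flood of "completed with error (sct=0, sc=8)" completions reported below.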
00:09:40.800 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.800 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.800 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.061 starting I/O failed: -6 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 [2024-07-22 19:13:59.789025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026000 is same with the state(5) to be set 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 
Write completed with error (sct=0, sc=8) 00:09:41.061 Read completed with error (sct=0, sc=8) 00:09:41.061 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 
00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 starting I/O failed: -6 00:09:41.062 [2024-07-22 19:13:59.793372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(5) to be set 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed 
with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 Write completed with error (sct=0, sc=8) 00:09:41.062 Read completed with error (sct=0, sc=8) 00:09:41.062 [2024-07-22 19:13:59.794133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030a00 is same with the state(5) to be set 00:09:42.005 [2024-07-22 19:14:00.759154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025600 is same with the state(5) to be set 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 [2024-07-22 19:14:00.792936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025b00 is same with the state(5) to be set 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read 
completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 [2024-07-22 19:14:00.793416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026500 is same with the state(5) to be set 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 [2024-07-22 19:14:00.795261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(5) to be set 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 Write completed with error (sct=0, sc=8) 00:09:42.005 Read completed with error (sct=0, sc=8) 00:09:42.005 [2024-07-22 19:14:00.795784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(5) to be set 00:09:42.005 19:14:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.005 19:14:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:42.005 Initializing NVMe Controllers 00:09:42.005 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:42.005 Controller IO queue size 128, less than required. 00:09:42.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:42.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:42.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:42.005 Initialization complete. Launching workers. 00:09:42.005 ======================================================== 00:09:42.005 Latency(us) 00:09:42.005 Device Information : IOPS MiB/s Average min max 00:09:42.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.31 0.08 902203.14 380.52 1006954.41 00:09:42.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.33 0.08 912659.02 784.31 1010535.17 00:09:42.005 ======================================================== 00:09:42.005 Total : 329.63 0.16 907352.11 380.52 1010535.17 00:09:42.005 00:09:42.005 19:14:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2720588 00:09:42.005 19:14:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:42.005 [2024-07-22 19:14:00.798615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025600 (9): Bad file descriptor 00:09:42.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2720588 00:09:42.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2720588) - No such process 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2720588 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2720588 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2720588 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.576 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.577 [2024-07-22 19:14:01.326276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2721423 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:42.577 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:42.577 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.577 [2024-07-22 19:14:01.436552] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
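The "(( delay++ > 20 ))" / "kill -0 2721423" / "sleep 0.5" lines that follow are a bounded liveness poll in delete_subsystem.sh: the test waits for the perf process to die once the subsystem it is driving disappears, giving up after roughly ten seconds. Reconstructed from the trace (the script's actual timeout branch is not visible here), the pattern is approximately:

    delay=0
    while kill -0 "$perf_pid"; do      # perf (pid 2721423 above) still alive?
        sleep 0.5
        if (( delay++ > 20 )); then    # ~10 s upper bound
            exit 1                     # assumed failure handling; not shown in this trace
        fi
    done
    wait "$perf_pid"                   # reap it; expected to exit non-zero (delete_subsystem.sh@67 below)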
00:09:43.147 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.147 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:43.147 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:43.408 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.408 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:43.408 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:43.979 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.979 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:43.979 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.550 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.550 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:44.550 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.120 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:45.120 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:45.120 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.691 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:45.691 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:45.691 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.953 Initializing NVMe Controllers 00:09:45.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:45.953 Controller IO queue size 128, less than required. 00:09:45.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:45.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:45.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:45.953 Initialization complete. Launching workers. 
00:09:45.953 ======================================================== 00:09:45.953 Latency(us) 00:09:45.953 Device Information : IOPS MiB/s Average min max 00:09:45.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002843.88 1000350.69 1041355.92 00:09:45.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003340.12 1000298.99 1042578.10 00:09:45.953 ======================================================== 00:09:45.953 Total : 256.00 0.12 1003092.00 1000298.99 1042578.10 00:09:45.953 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2721423 00:09:45.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2721423) - No such process 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2721423 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.953 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.953 rmmod nvme_tcp 00:09:45.953 rmmod nvme_fabrics 00:09:46.214 rmmod nvme_keyring 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2720430 ']' 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2720430 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2720430 ']' 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2720430 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.214 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2720430 00:09:46.214 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.214 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:09:46.214 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2720430' 00:09:46.214 killing process with pid 2720430 00:09:46.214 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2720430 00:09:46.214 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2720430 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.245 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.161 00:09:49.161 real 0m18.052s 00:09:49.161 user 0m31.398s 00:09:49.161 sys 0m6.012s 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.161 ************************************ 00:09:49.161 END TEST nvmf_delete_subsystem 00:09:49.161 ************************************ 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.161 19:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.161 ************************************ 00:09:49.161 START TEST nvmf_host_management 00:09:49.161 ************************************ 00:09:49.161 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:49.161 * Looking for test storage... 
00:09:49.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.422 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.423 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.014 
19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:56.014 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:56.014 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:56.014 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:56.014 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.014 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:56.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
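The entries above are the nvmf_tcp_init phase of nvmf/common.sh: one of the two E810 ports, cvl_0_0, becomes the target side inside its own network namespace so both ends can exchange real TCP traffic on a single host. Before the ping replies below, here is that setup condensed into a hand-written recap of exactly the commands traced (nothing beyond what the log shows; interface names and addresses are the ones from this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                   # initiator -> target; replies logged below
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator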
00:09:56.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:09:56.276 00:09:56.276 --- 10.0.0.2 ping statistics --- 00:09:56.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.276 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:09:56.276 00:09:56.276 --- 10.0.0.1 ping statistics --- 00:09:56.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.276 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.276 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2726399 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2726399 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2726399 ']' 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.537 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:56.537 [2024-07-22 19:14:15.360164] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:56.537 [2024-07-22 19:14:15.360295] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.537 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.797 [2024-07-22 19:14:15.514246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.058 [2024-07-22 19:14:15.756011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.058 [2024-07-22 19:14:15.756075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.058 [2024-07-22 19:14:15.756090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.058 [2024-07-22 19:14:15.756101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.058 [2024-07-22 19:14:15.756113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.058 [2024-07-22 19:14:15.756285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.058 [2024-07-22 19:14:15.756492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.058 [2024-07-22 19:14:15.756600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.058 [2024-07-22 19:14:15.756627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.319 [2024-07-22 19:14:16.150521] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.319 Malloc0 00:09:57.319 [2024-07-22 19:14:16.250946] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.319 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2726528 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2726528 /var/tmp/bdevperf.sock 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2726528 ']' 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:57.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
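With the target side up (TCP transport created, a Malloc0 bdev exposed through nqn.2016-06.io.spdk:cnode0, listener on 10.0.0.2:4420), the harness launches bdevperf as the NVMe/TCP host. A minimal sketch of that launch, assuming the /dev/fd/63 in the traced command line comes from bash process substitution around the gen_nvmf_target_json helper whose expansion is traced just below:

# 64-deep queue, 64 KiB I/Os, "verify" workload, 10 s run; -r gives this
# bdevperf instance its own RPC socket so the test can poll it while it runs.
# gen_nvmf_target_json is the nvmf/common.sh helper traced in the next entries.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10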
00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:57.580 { 00:09:57.580 "params": { 00:09:57.580 "name": "Nvme$subsystem", 00:09:57.580 "trtype": "$TEST_TRANSPORT", 00:09:57.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.580 "adrfam": "ipv4", 00:09:57.580 "trsvcid": "$NVMF_PORT", 00:09:57.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.580 "hdgst": ${hdgst:-false}, 00:09:57.580 "ddgst": ${ddgst:-false} 00:09:57.580 }, 00:09:57.580 "method": "bdev_nvme_attach_controller" 00:09:57.580 } 00:09:57.580 EOF 00:09:57.580 )") 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:57.580 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:57.581 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:57.581 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:57.581 "params": { 00:09:57.581 "name": "Nvme0", 00:09:57.581 "trtype": "tcp", 00:09:57.581 "traddr": "10.0.0.2", 00:09:57.581 "adrfam": "ipv4", 00:09:57.581 "trsvcid": "4420", 00:09:57.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:57.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:57.581 "hdgst": false, 00:09:57.581 "ddgst": false 00:09:57.581 }, 00:09:57.581 "method": "bdev_nvme_attach_controller" 00:09:57.581 }' 00:09:57.581 [2024-07-22 19:14:16.385236] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:57.581 [2024-07-22 19:14:16.385342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726528 ] 00:09:57.581 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.581 [2024-07-22 19:14:16.496527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.841 [2024-07-22 19:14:16.674317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.413 Running I/O for 10 seconds... 
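The first thing the test does after "Running I/O for 10 seconds..." is the waitforio helper: it polls this bdevperf instance over its RPC socket until the freshly attached Nvme0n1 bdev has completed a minimum number of reads, so the fault below is injected while I/O is genuinely in flight. The same check rewritten as a stand-alone loop that calls rpc.py directly instead of the rpc_cmd wrapper (threshold, attempt budget and poll interval taken from the trace):

sock=/var/tmp/bdevperf.sock
for _ in $(seq 1 10); do                      # up to 10 attempts, like the i=10 countdown in the trace
    reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme0n1 |
            jq -r '.bdevs[0].num_read_ops')
    reads=${reads:-0}                         # treat an empty answer as "no I/O yet"
    if [ "$reads" -ge 100 ]; then             # the trace sees 67 first, then 463
        echo "Nvme0n1 is serving I/O ($reads reads completed)"
        break
    fi
    sleep 0.25
done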
00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:58.413 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.675 19:14:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=463 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 463 -ge 100 ']' 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.675 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:58.675 [2024-07-22 19:14:17.543163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.675 [2024-07-22 19:14:17.543405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 [2024-07-22 19:14:17.543773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:09:58.676 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.676 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:58.676 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.676 [2024-07-22 19:14:17.548944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:58.676 [2024-07-22 19:14:17.548997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:58.676 [2024-07-22 19:14:17.549024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:58.676 [2024-07-22 19:14:17.549047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:58.676 [2024-07-22 19:14:17.549070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:09:58.676 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:58.676 [2024-07-22 19:14:17.549131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 
[2024-07-22 19:14:17.549146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 
19:14:17.549418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.676 [2024-07-22 19:14:17.549465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.676 [2024-07-22 19:14:17.549477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.549980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.549993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.677 [2024-07-22 19:14:17.550378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.677 [2024-07-22 19:14:17.550390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.678 [2024-07-22 19:14:17.550667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.678 [2024-07-22 19:14:17.550892] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389080 was disconnected and freed. reset controller. 00:09:58.678 [2024-07-22 19:14:17.552185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:58.678 task offset: 67456 on job bdev=Nvme0n1 fails 00:09:58.678 00:09:58.678 Latency(us) 00:09:58.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.678 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:58.678 Job: Nvme0n1 ended in about 0.44 seconds with error 00:09:58.678 Verification LBA range: start 0x0 length 0x400 00:09:58.678 Nvme0n1 : 0.44 1203.39 75.21 146.14 0.00 46001.55 2375.68 37792.43 00:09:58.678 =================================================================================================================== 00:09:58.678 Total : 1203.39 75.21 146.14 0.00 46001.55 2375.68 37792.43 00:09:58.678 [2024-07-22 19:14:17.556493] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.678 [2024-07-22 19:14:17.556527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:09:58.678 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.678 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:58.678 [2024-07-22 19:14:17.577975] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
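[Editor's sketch, not part of the captured output] The trace that follows kills the failed bdevperf instance and relaunches it with a JSON bdev config generated by gen_nvmf_target_json and handed over a file descriptor. A minimal stand-alone approximation of that relaunch is sketched here: the address, port, NQNs and bdevperf flags are copied from this log, while the subsystems/bdev wrapper around the attach-controller entry and the relative bdevperf path are assumptions, so treat it as illustrative rather than a verbatim reproduction.

# assumed: an SPDK NVMe-oF target is already listening on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0
config='{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}'
# same invocation as in the trace below, with /dev/fd/62 replaced by process substitution
./build/examples/bdevperf --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 1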
00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2726528 00:09:59.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2726528) - No such process 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:59.620 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:59.620 { 00:09:59.620 "params": { 00:09:59.620 "name": "Nvme$subsystem", 00:09:59.620 "trtype": "$TEST_TRANSPORT", 00:09:59.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.620 "adrfam": "ipv4", 00:09:59.620 "trsvcid": "$NVMF_PORT", 00:09:59.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.620 "hdgst": ${hdgst:-false}, 00:09:59.620 "ddgst": ${ddgst:-false} 00:09:59.620 }, 00:09:59.620 "method": "bdev_nvme_attach_controller" 00:09:59.620 } 00:09:59.620 EOF 00:09:59.620 )") 00:09:59.882 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:59.882 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:59.882 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:59.882 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:59.882 "params": { 00:09:59.882 "name": "Nvme0", 00:09:59.882 "trtype": "tcp", 00:09:59.882 "traddr": "10.0.0.2", 00:09:59.882 "adrfam": "ipv4", 00:09:59.882 "trsvcid": "4420", 00:09:59.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:59.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:59.882 "hdgst": false, 00:09:59.882 "ddgst": false 00:09:59.882 }, 00:09:59.882 "method": "bdev_nvme_attach_controller" 00:09:59.882 }' 00:09:59.882 [2024-07-22 19:14:18.643730] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:59.882 [2024-07-22 19:14:18.643844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727074 ] 00:09:59.882 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.882 [2024-07-22 19:14:18.757831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.142 [2024-07-22 19:14:18.937052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.403 Running I/O for 1 seconds... 00:10:01.789 00:10:01.789 Latency(us) 00:10:01.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.789 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:01.789 Verification LBA range: start 0x0 length 0x400 00:10:01.789 Nvme0n1 : 1.03 1365.09 85.32 0.00 0.00 46028.33 7809.71 41724.59 00:10:01.789 =================================================================================================================== 00:10:01.789 Total : 1365.09 85.32 0.00 0.00 46028.33 7809.71 41724.59 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.360 rmmod nvme_tcp 00:10:02.360 rmmod nvme_fabrics 00:10:02.360 rmmod nvme_keyring 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2726399 ']' 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2726399 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2726399 ']' 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2726399 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@953 -- # uname 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2726399 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2726399' 00:10:02.360 killing process with pid 2726399 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2726399 00:10:02.360 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2726399 00:10:02.932 [2024-07-22 19:14:21.859114] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.193 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.106 19:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.106 19:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:05.106 00:10:05.106 real 0m15.964s 00:10:05.106 user 0m30.117s 00:10:05.106 sys 0m6.542s 00:10:05.106 19:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.106 19:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.106 ************************************ 00:10:05.106 END TEST nvmf_host_management 00:10:05.106 ************************************ 00:10:05.106 19:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:05.106 19:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:05.106 19:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:05.106 19:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.106 19:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 ************************************ 00:10:05.368 START TEST nvmf_lvol 00:10:05.368 
************************************ 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:05.368 * Looking for test storage... 00:10:05.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.368 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:13.514 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:13.514 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:13.514 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:13.514 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.514 19:14:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.514 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.514 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.514 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.514 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:10:13.515 00:10:13.515 --- 10.0.0.2 ping statistics --- 00:10:13.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.515 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:10:13.515 00:10:13.515 --- 10.0.0.1 ping statistics --- 00:10:13.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.515 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2731887 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2731887 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2731887 ']' 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.515 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.515 [2024-07-22 19:14:31.439572] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:13.515 [2024-07-22 19:14:31.439696] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.515 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.515 [2024-07-22 19:14:31.573238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.515 [2024-07-22 19:14:31.755154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.515 [2024-07-22 19:14:31.755195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.515 [2024-07-22 19:14:31.755214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.515 [2024-07-22 19:14:31.755224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.515 [2024-07-22 19:14:31.755234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.515 [2024-07-22 19:14:31.755416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.515 [2024-07-22 19:14:31.755568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.515 [2024-07-22 19:14:31.755572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.515 [2024-07-22 19:14:32.367085] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.515 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.774 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:13.774 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.034 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:14.034 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:14.295 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:14.295 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6f6dc1ec-9da8-4944-a31b-0faf0567763e 
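[Editor's sketch, not part of the captured output] For orientation, this is a condensed recap of the RPC sequence nvmf_lvol.sh drives in the trace before and after this point: build a raid0 over two malloc bdevs, carve an lvolstore and an lvol out of it, export the lvol over NVMe/TCP, then exercise snapshot, resize, clone and inflate while spdk_nvme_perf runs against the subsystem. Paths are shortened and the shell variables stand in for the UUIDs printed in the trace; this is a summary of the logged commands, not an independent procedure.

rpc=scripts/rpc.py                                    # full jenkins workspace path shortened
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # e.g. 6f6dc1ec-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # e.g. 66b130f2-... in this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # taken while spdk_nvme_perf is running
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"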
00:10:14.295 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f6dc1ec-9da8-4944-a31b-0faf0567763e lvol 20 00:10:14.553 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=66b130f2-01cb-47e1-815b-6dcff8357f41 00:10:14.553 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:14.813 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66b130f2-01cb-47e1-815b-6dcff8357f41 00:10:14.813 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:15.073 [2024-07-22 19:14:33.865442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.073 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:15.334 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2732421 00:10:15.334 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:15.334 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:15.334 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.275 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 66b130f2-01cb-47e1-815b-6dcff8357f41 MY_SNAPSHOT 00:10:16.535 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7a295089-5895-44f1-a59b-560d83c8986a 00:10:16.535 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 66b130f2-01cb-47e1-815b-6dcff8357f41 30 00:10:16.535 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7a295089-5895-44f1-a59b-560d83c8986a MY_CLONE 00:10:16.796 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a10ae5fd-e118-47a6-bc99-a22e785266be 00:10:16.796 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a10ae5fd-e118-47a6-bc99-a22e785266be 00:10:17.414 19:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2732421 00:10:25.551 Initializing NVMe Controllers 00:10:25.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:25.551 Controller IO queue size 128, less than required. 00:10:25.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:25.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:25.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:25.551 Initialization complete. Launching workers. 00:10:25.551 ======================================================== 00:10:25.551 Latency(us) 00:10:25.551 Device Information : IOPS MiB/s Average min max 00:10:25.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16166.80 63.15 7918.00 497.48 111229.51 00:10:25.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11552.80 45.13 11082.90 4056.99 122504.26 00:10:25.551 ======================================================== 00:10:25.551 Total : 27719.60 108.28 9237.05 497.48 122504.26 00:10:25.551 00:10:25.551 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:25.813 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66b130f2-01cb-47e1-815b-6dcff8357f41 00:10:25.813 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f6dc1ec-9da8-4944-a31b-0faf0567763e 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:26.074 rmmod nvme_tcp 00:10:26.074 rmmod nvme_fabrics 00:10:26.074 rmmod nvme_keyring 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2731887 ']' 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2731887 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2731887 ']' 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2731887 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.074 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2731887 00:10:26.074 19:14:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:26.074 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:26.074 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2731887' 00:10:26.074 killing process with pid 2731887 00:10:26.075 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2731887 00:10:26.075 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2731887 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.461 19:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.377 00:10:29.377 real 0m24.084s 00:10:29.377 user 1m5.488s 00:10:29.377 sys 0m7.836s 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:29.377 ************************************ 00:10:29.377 END TEST nvmf_lvol 00:10:29.377 ************************************ 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.377 ************************************ 00:10:29.377 START TEST nvmf_lvs_grow 00:10:29.377 ************************************ 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:29.377 * Looking for test storage... 
00:10:29.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.377 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.639 19:14:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:29.639 19:14:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.639 19:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:36.229 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:36.229 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.229 
19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:36.229 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.229 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:36.230 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.230 19:14:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.230 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:10:36.491 00:10:36.491 --- 10.0.0.2 ping statistics --- 00:10:36.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.491 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:10:36.491 00:10:36.491 --- 10.0.0.1 ping statistics --- 00:10:36.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.491 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.491 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2738941 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2738941 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2738941 ']' 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:36.753 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:36.753 [2024-07-22 19:14:55.565170] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
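For reference, the nvmf_tcp_init sequence traced above splits the two detected E810 ports across network namespaces so a single host can play both roles: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target listen address), while cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side). A condensed sketch of that setup, using only commands that appear in the trace (the interface names are whatever this runner detects; NS is shorthand introduced here):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0       # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP (port 4420) back in
ping -c 1 10.0.0.2                                            # default namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                        # target namespace -> default namespace
modprobe nvme-tcp                                             # loaded by the harness before starting the target

nvmf_tgt itself is then launched under ip netns exec "$NS", as the nvmfappstart trace that follows shows.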
00:10:36.753 [2024-07-22 19:14:55.565305] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.753 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.753 [2024-07-22 19:14:55.699382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.014 [2024-07-22 19:14:55.880613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.014 [2024-07-22 19:14:55.880659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.014 [2024-07-22 19:14:55.880671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.014 [2024-07-22 19:14:55.880680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.014 [2024-07-22 19:14:55.880690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.014 [2024-07-22 19:14:55.880717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:37.587 [2024-07-22 19:14:56.469053] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.587 ************************************ 00:10:37.587 START TEST lvs_grow_clean 00:10:37.587 ************************************ 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:37.587 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:37.849 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:37.849 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:38.110 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=942424e5-0738-4556-8e2e-126a22cf204b 00:10:38.110 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:38.110 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:38.110 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:38.110 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:38.110 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 942424e5-0738-4556-8e2e-126a22cf204b lvol 150 00:10:38.371 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d8ec2bac-40ef-4ecb-904d-b93060ff0024 00:10:38.371 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:38.371 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:38.371 [2024-07-22 19:14:57.309936] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:38.371 [2024-07-22 19:14:57.310010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:38.371 true 00:10:38.637 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:38.637 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:38.637 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:38.637 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:38.903 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d8ec2bac-40ef-4ecb-904d-b93060ff0024 00:10:38.903 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:39.164 [2024-07-22 19:14:57.931961] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.164 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2739463 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2739463 /var/tmp/bdevperf.sock 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2739463 ']' 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:39.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:39.164 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:39.425 [2024-07-22 19:14:58.172621] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
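The initiator in this test is SPDK's bdevperf example application rather than the kernel host stack: it gets its own RPC socket (-r /var/tmp/bdevperf.sock), runs on core 1 (-m 0x2), and is configured for 4096-byte random writes at queue depth 128 for 10 seconds (-o/-q/-w/-t, matching the result header printed further down), with -z holding the workload until perform_tests is sent over that socket. A condensed sketch of that sequence, taken from the trace, with SPDK standing in for the repository path shown above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# once the socket is listening, attach to the subsystem exported by the target over TCP:
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000    # wait for Nvme0n1 to appear
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # start the timed run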
00:10:39.425 [2024-07-22 19:14:58.172731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739463 ] 00:10:39.425 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.425 [2024-07-22 19:14:58.301043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.686 [2024-07-22 19:14:58.476582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.257 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.257 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:40.257 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:40.518 Nvme0n1 00:10:40.518 19:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:40.518 [ 00:10:40.518 { 00:10:40.518 "name": "Nvme0n1", 00:10:40.518 "aliases": [ 00:10:40.518 "d8ec2bac-40ef-4ecb-904d-b93060ff0024" 00:10:40.518 ], 00:10:40.518 "product_name": "NVMe disk", 00:10:40.518 "block_size": 4096, 00:10:40.518 "num_blocks": 38912, 00:10:40.518 "uuid": "d8ec2bac-40ef-4ecb-904d-b93060ff0024", 00:10:40.518 "assigned_rate_limits": { 00:10:40.518 "rw_ios_per_sec": 0, 00:10:40.518 "rw_mbytes_per_sec": 0, 00:10:40.518 "r_mbytes_per_sec": 0, 00:10:40.518 "w_mbytes_per_sec": 0 00:10:40.518 }, 00:10:40.518 "claimed": false, 00:10:40.518 "zoned": false, 00:10:40.518 "supported_io_types": { 00:10:40.518 "read": true, 00:10:40.518 "write": true, 00:10:40.518 "unmap": true, 00:10:40.518 "flush": true, 00:10:40.518 "reset": true, 00:10:40.518 "nvme_admin": true, 00:10:40.518 "nvme_io": true, 00:10:40.518 "nvme_io_md": false, 00:10:40.518 "write_zeroes": true, 00:10:40.518 "zcopy": false, 00:10:40.518 "get_zone_info": false, 00:10:40.518 "zone_management": false, 00:10:40.518 "zone_append": false, 00:10:40.518 "compare": true, 00:10:40.518 "compare_and_write": true, 00:10:40.518 "abort": true, 00:10:40.518 "seek_hole": false, 00:10:40.518 "seek_data": false, 00:10:40.518 "copy": true, 00:10:40.518 "nvme_iov_md": false 00:10:40.518 }, 00:10:40.518 "memory_domains": [ 00:10:40.518 { 00:10:40.518 "dma_device_id": "system", 00:10:40.518 "dma_device_type": 1 00:10:40.518 } 00:10:40.518 ], 00:10:40.518 "driver_specific": { 00:10:40.518 "nvme": [ 00:10:40.518 { 00:10:40.518 "trid": { 00:10:40.518 "trtype": "TCP", 00:10:40.518 "adrfam": "IPv4", 00:10:40.518 "traddr": "10.0.0.2", 00:10:40.518 "trsvcid": "4420", 00:10:40.518 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:40.518 }, 00:10:40.518 "ctrlr_data": { 00:10:40.518 "cntlid": 1, 00:10:40.518 "vendor_id": "0x8086", 00:10:40.518 "model_number": "SPDK bdev Controller", 00:10:40.518 "serial_number": "SPDK0", 00:10:40.518 "firmware_revision": "24.09", 00:10:40.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:40.518 "oacs": { 00:10:40.518 "security": 0, 00:10:40.518 "format": 0, 00:10:40.518 "firmware": 0, 00:10:40.518 "ns_manage": 0 00:10:40.518 }, 00:10:40.518 
"multi_ctrlr": true, 00:10:40.518 "ana_reporting": false 00:10:40.518 }, 00:10:40.519 "vs": { 00:10:40.519 "nvme_version": "1.3" 00:10:40.519 }, 00:10:40.519 "ns_data": { 00:10:40.519 "id": 1, 00:10:40.519 "can_share": true 00:10:40.519 } 00:10:40.519 } 00:10:40.519 ], 00:10:40.519 "mp_policy": "active_passive" 00:10:40.519 } 00:10:40.519 } 00:10:40.519 ] 00:10:40.519 19:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2739669 00:10:40.519 19:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:40.519 19:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:40.780 Running I/O for 10 seconds... 00:10:41.722 Latency(us) 00:10:41.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.722 Nvme0n1 : 1.00 16320.00 63.75 0.00 0.00 0.00 0.00 0.00 00:10:41.722 =================================================================================================================== 00:10:41.722 Total : 16320.00 63.75 0.00 0.00 0.00 0.00 0.00 00:10:41.722 00:10:42.665 19:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:42.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.665 Nvme0n1 : 2.00 16408.50 64.10 0.00 0.00 0.00 0.00 0.00 00:10:42.665 =================================================================================================================== 00:10:42.665 Total : 16408.50 64.10 0.00 0.00 0.00 0.00 0.00 00:10:42.665 00:10:42.665 true 00:10:42.665 19:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:42.665 19:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:42.925 19:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:42.925 19:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:42.925 19:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2739669 00:10:43.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.866 Nvme0n1 : 3.00 16459.33 64.29 0.00 0.00 0.00 0.00 0.00 00:10:43.866 =================================================================================================================== 00:10:43.866 Total : 16459.33 64.29 0.00 0.00 0.00 0.00 0.00 00:10:43.866 00:10:44.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.808 Nvme0n1 : 4.00 16486.50 64.40 0.00 0.00 0.00 0.00 0.00 00:10:44.808 =================================================================================================================== 00:10:44.808 Total : 16486.50 64.40 0.00 0.00 0.00 0.00 0.00 00:10:44.808 00:10:45.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:10:45.749 Nvme0n1 : 5.00 16516.60 64.52 0.00 0.00 0.00 0.00 0.00 00:10:45.749 =================================================================================================================== 00:10:45.749 Total : 16516.60 64.52 0.00 0.00 0.00 0.00 0.00 00:10:45.749 00:10:46.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.725 Nvme0n1 : 6.00 16536.67 64.60 0.00 0.00 0.00 0.00 0.00 00:10:46.725 =================================================================================================================== 00:10:46.725 Total : 16536.67 64.60 0.00 0.00 0.00 0.00 0.00 00:10:46.725 00:10:47.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.695 Nvme0n1 : 7.00 16560.00 64.69 0.00 0.00 0.00 0.00 0.00 00:10:47.695 =================================================================================================================== 00:10:47.695 Total : 16560.00 64.69 0.00 0.00 0.00 0.00 0.00 00:10:47.695 00:10:48.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.636 Nvme0n1 : 8.00 16569.62 64.73 0.00 0.00 0.00 0.00 0.00 00:10:48.636 =================================================================================================================== 00:10:48.636 Total : 16569.62 64.73 0.00 0.00 0.00 0.00 0.00 00:10:48.636 00:10:49.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.576 Nvme0n1 : 9.00 16584.22 64.78 0.00 0.00 0.00 0.00 0.00 00:10:49.576 =================================================================================================================== 00:10:49.576 Total : 16584.22 64.78 0.00 0.00 0.00 0.00 0.00 00:10:49.576 00:10:50.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.960 Nvme0n1 : 10.00 16596.00 64.83 0.00 0.00 0.00 0.00 0.00 00:10:50.960 =================================================================================================================== 00:10:50.960 Total : 16596.00 64.83 0.00 0.00 0.00 0.00 0.00 00:10:50.960 00:10:50.960 00:10:50.960 Latency(us) 00:10:50.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.960 Nvme0n1 : 10.00 16594.89 64.82 0.00 0.00 7708.46 2484.91 13707.95 00:10:50.960 =================================================================================================================== 00:10:50.960 Total : 16594.89 64.82 0.00 0.00 7708.46 2484.91 13707.95 00:10:50.960 0 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2739463 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2739463 ']' 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2739463 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739463 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:50.960 
19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739463' 00:10:50.960 killing process with pid 2739463 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2739463 00:10:50.960 Received shutdown signal, test time was about 10.000000 seconds 00:10:50.960 00:10:50.960 Latency(us) 00:10:50.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.960 =================================================================================================================== 00:10:50.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:50.960 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2739463 00:10:51.234 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:51.494 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:51.494 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:51.494 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:51.753 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:51.753 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:51.753 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:52.013 [2024-07-22 19:15:10.725948] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:52.013 request: 00:10:52.013 { 00:10:52.013 "uuid": "942424e5-0738-4556-8e2e-126a22cf204b", 00:10:52.013 "method": "bdev_lvol_get_lvstores", 00:10:52.013 "req_id": 1 00:10:52.013 } 00:10:52.013 Got JSON-RPC error response 00:10:52.013 response: 00:10:52.013 { 00:10:52.013 "code": -19, 00:10:52.013 "message": "No such device" 00:10:52.013 } 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.013 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:52.274 aio_bdev 00:10:52.274 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d8ec2bac-40ef-4ecb-904d-b93060ff0024 00:10:52.274 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d8ec2bac-40ef-4ecb-904d-b93060ff0024 00:10:52.274 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:52.274 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:52.274 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:52.274 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:52.274 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:52.535 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b d8ec2bac-40ef-4ecb-904d-b93060ff0024 -t 2000 00:10:52.535 [ 00:10:52.535 { 00:10:52.535 "name": "d8ec2bac-40ef-4ecb-904d-b93060ff0024", 00:10:52.535 "aliases": [ 00:10:52.535 "lvs/lvol" 00:10:52.535 ], 00:10:52.535 "product_name": "Logical Volume", 00:10:52.535 "block_size": 4096, 00:10:52.535 "num_blocks": 38912, 00:10:52.535 "uuid": "d8ec2bac-40ef-4ecb-904d-b93060ff0024", 00:10:52.535 "assigned_rate_limits": { 00:10:52.535 "rw_ios_per_sec": 0, 00:10:52.535 "rw_mbytes_per_sec": 0, 00:10:52.535 "r_mbytes_per_sec": 0, 00:10:52.535 "w_mbytes_per_sec": 0 00:10:52.535 }, 00:10:52.535 "claimed": false, 00:10:52.535 "zoned": false, 00:10:52.535 "supported_io_types": { 00:10:52.535 "read": true, 00:10:52.535 "write": true, 00:10:52.535 "unmap": true, 00:10:52.535 "flush": false, 00:10:52.535 "reset": true, 00:10:52.535 "nvme_admin": false, 00:10:52.535 "nvme_io": false, 00:10:52.535 "nvme_io_md": false, 00:10:52.535 "write_zeroes": true, 00:10:52.535 "zcopy": false, 00:10:52.535 "get_zone_info": false, 00:10:52.535 "zone_management": false, 00:10:52.535 "zone_append": false, 00:10:52.535 "compare": false, 00:10:52.535 "compare_and_write": false, 00:10:52.535 "abort": false, 00:10:52.535 "seek_hole": true, 00:10:52.535 "seek_data": true, 00:10:52.535 "copy": false, 00:10:52.535 "nvme_iov_md": false 00:10:52.535 }, 00:10:52.535 "driver_specific": { 00:10:52.535 "lvol": { 00:10:52.535 "lvol_store_uuid": "942424e5-0738-4556-8e2e-126a22cf204b", 00:10:52.535 "base_bdev": "aio_bdev", 00:10:52.535 "thin_provision": false, 00:10:52.535 "num_allocated_clusters": 38, 00:10:52.535 "snapshot": false, 00:10:52.535 "clone": false, 00:10:52.535 "esnap_clone": false 00:10:52.535 } 00:10:52.535 } 00:10:52.535 } 00:10:52.535 ] 00:10:52.535 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:52.535 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:52.535 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:52.796 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:52.796 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:52.796 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:53.056 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:53.056 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d8ec2bac-40ef-4ecb-904d-b93060ff0024 00:10:53.056 19:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 942424e5-0738-4556-8e2e-126a22cf204b 00:10:53.316 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:53.576 00:10:53.576 real 0m15.791s 00:10:53.576 user 0m15.369s 00:10:53.576 sys 0m1.365s 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 ************************************ 00:10:53.576 END TEST lvs_grow_clean 00:10:53.576 ************************************ 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 ************************************ 00:10:53.576 START TEST lvs_grow_dirty 00:10:53.576 ************************************ 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:53.576 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:53.577 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:53.577 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:53.577 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:53.577 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:53.577 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:53.577 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:53.577 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:53.837 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:53.837 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 
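The cluster counts asserted in both the clean and dirty variants follow directly from the sizes involved: a 200 MiB AIO file carved into 4 MiB clusters yields 50 clusters, of which 49 are reported as data clusters (the rest is taken by lvstore metadata); the 150 MiB lvol consumes 38 of them (150/4 = 37.5, rounded up, thick-provisioned); and once the file is grown to 400 MiB and rescanned, 100 - 1 = 99 data clusters are expected, with 99 - 38 = 61 free. A minimal sketch of just the grow portion, reusing the RPC calls from the trace and omitting the NVMe-oF subsystem and bdevperf steps (SPDK, RPC, AIO, lvs and lvol are shorthand introduced here):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
AIO=$SPDK/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"
$RPC bdev_aio_create "$AIO" aio_bdev 4096                          # 51200 blocks of 4 KiB
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)                   # 150 MiB -> 38 allocated clusters
truncate -s 400M "$AIO"                                            # grow the backing file...
$RPC bdev_aio_rescan aio_bdev                                      # ...and rescan: 51200 -> 102400 blocks
$RPC bdev_lvol_grow_lvstore -u "$lvs"                              # lvstore picks up the extra clusters
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61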
00:10:53.837 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5435adf0-37af-4484-9920-d38c85543f54 00:10:53.837 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:10:53.837 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:54.097 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:54.097 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:54.097 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5435adf0-37af-4484-9920-d38c85543f54 lvol 150 00:10:54.097 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d8bd5e12-34e6-42c0-b537-1dc2d423b448 00:10:54.097 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:54.097 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:54.357 [2024-07-22 19:15:13.170469] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:54.357 [2024-07-22 19:15:13.170541] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:54.357 true 00:10:54.357 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:10:54.357 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:54.617 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:54.617 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:54.617 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d8bd5e12-34e6-42c0-b537-1dc2d423b448 00:10:54.878 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:54.878 [2024-07-22 19:15:13.776441] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.878 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2743306 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2743306 /var/tmp/bdevperf.sock 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2743306 ']' 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.139 19:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:55.139 [2024-07-22 19:15:14.021055] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:55.139 [2024-07-22 19:15:14.021169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743306 ] 00:10:55.139 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.400 [2024-07-22 19:15:14.143130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.400 [2024-07-22 19:15:14.279632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.973 19:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.973 19:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:55.973 19:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:56.234 Nvme0n1 00:10:56.235 19:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:56.235 [ 00:10:56.235 { 00:10:56.235 "name": "Nvme0n1", 00:10:56.235 "aliases": [ 00:10:56.235 "d8bd5e12-34e6-42c0-b537-1dc2d423b448" 00:10:56.235 ], 00:10:56.235 "product_name": "NVMe disk", 00:10:56.235 "block_size": 4096, 00:10:56.235 "num_blocks": 38912, 00:10:56.235 "uuid": "d8bd5e12-34e6-42c0-b537-1dc2d423b448", 00:10:56.235 "assigned_rate_limits": { 00:10:56.235 "rw_ios_per_sec": 0, 00:10:56.235 "rw_mbytes_per_sec": 0, 00:10:56.235 "r_mbytes_per_sec": 0, 00:10:56.235 "w_mbytes_per_sec": 0 00:10:56.235 }, 00:10:56.235 "claimed": false, 00:10:56.235 "zoned": false, 00:10:56.235 "supported_io_types": { 00:10:56.235 "read": true, 00:10:56.235 "write": true, 00:10:56.235 "unmap": true, 00:10:56.235 "flush": true, 00:10:56.235 "reset": true, 00:10:56.235 "nvme_admin": true, 00:10:56.235 "nvme_io": true, 00:10:56.235 "nvme_io_md": false, 00:10:56.235 "write_zeroes": true, 00:10:56.235 "zcopy": false, 00:10:56.235 "get_zone_info": false, 00:10:56.235 "zone_management": false, 00:10:56.235 "zone_append": false, 00:10:56.235 "compare": true, 00:10:56.235 "compare_and_write": true, 00:10:56.235 "abort": true, 00:10:56.235 "seek_hole": false, 00:10:56.235 "seek_data": false, 00:10:56.235 "copy": true, 00:10:56.235 "nvme_iov_md": false 00:10:56.235 }, 00:10:56.235 "memory_domains": [ 00:10:56.235 { 00:10:56.235 "dma_device_id": "system", 00:10:56.235 "dma_device_type": 1 00:10:56.235 } 00:10:56.235 ], 00:10:56.235 "driver_specific": { 00:10:56.235 "nvme": [ 00:10:56.235 { 00:10:56.235 "trid": { 00:10:56.235 "trtype": "TCP", 00:10:56.235 "adrfam": "IPv4", 00:10:56.235 "traddr": "10.0.0.2", 00:10:56.235 "trsvcid": "4420", 00:10:56.235 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:56.235 }, 00:10:56.235 "ctrlr_data": { 00:10:56.235 "cntlid": 1, 00:10:56.235 "vendor_id": "0x8086", 00:10:56.235 "model_number": "SPDK bdev Controller", 00:10:56.235 "serial_number": "SPDK0", 00:10:56.235 "firmware_revision": "24.09", 00:10:56.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:56.235 "oacs": { 00:10:56.235 "security": 0, 00:10:56.235 "format": 0, 00:10:56.235 "firmware": 0, 00:10:56.235 "ns_manage": 0 00:10:56.235 }, 00:10:56.235 
"multi_ctrlr": true, 00:10:56.235 "ana_reporting": false 00:10:56.235 }, 00:10:56.235 "vs": { 00:10:56.235 "nvme_version": "1.3" 00:10:56.235 }, 00:10:56.235 "ns_data": { 00:10:56.235 "id": 1, 00:10:56.235 "can_share": true 00:10:56.235 } 00:10:56.235 } 00:10:56.235 ], 00:10:56.235 "mp_policy": "active_passive" 00:10:56.235 } 00:10:56.235 } 00:10:56.235 ] 00:10:56.496 19:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2743441 00:10:56.496 19:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:56.496 19:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:56.496 Running I/O for 10 seconds... 00:10:57.439 Latency(us) 00:10:57.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.439 Nvme0n1 : 1.00 16329.00 63.79 0.00 0.00 0.00 0.00 0.00 00:10:57.439 =================================================================================================================== 00:10:57.439 Total : 16329.00 63.79 0.00 0.00 0.00 0.00 0.00 00:10:57.439 00:10:58.384 19:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5435adf0-37af-4484-9920-d38c85543f54 00:10:58.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.384 Nvme0n1 : 2.00 16387.50 64.01 0.00 0.00 0.00 0.00 0.00 00:10:58.384 =================================================================================================================== 00:10:58.384 Total : 16387.50 64.01 0.00 0.00 0.00 0.00 0.00 00:10:58.384 00:10:58.644 true 00:10:58.644 19:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:10:58.644 19:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:58.644 19:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:58.644 19:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:58.644 19:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2743441 00:10:59.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.586 Nvme0n1 : 3.00 16435.00 64.20 0.00 0.00 0.00 0.00 0.00 00:10:59.586 =================================================================================================================== 00:10:59.586 Total : 16435.00 64.20 0.00 0.00 0.00 0.00 0.00 00:10:59.586 00:11:00.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.530 Nvme0n1 : 4.00 16481.25 64.38 0.00 0.00 0.00 0.00 0.00 00:11:00.530 =================================================================================================================== 00:11:00.530 Total : 16481.25 64.38 0.00 0.00 0.00 0.00 0.00 00:11:00.530 00:11:01.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:11:01.472 Nvme0n1 : 5.00 16486.40 64.40 0.00 0.00 0.00 0.00 0.00 00:11:01.472 =================================================================================================================== 00:11:01.472 Total : 16486.40 64.40 0.00 0.00 0.00 0.00 0.00 00:11:01.472 00:11:02.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.415 Nvme0n1 : 6.00 16511.67 64.50 0.00 0.00 0.00 0.00 0.00 00:11:02.415 =================================================================================================================== 00:11:02.415 Total : 16511.67 64.50 0.00 0.00 0.00 0.00 0.00 00:11:02.415 00:11:03.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.357 Nvme0n1 : 7.00 16529.71 64.57 0.00 0.00 0.00 0.00 0.00 00:11:03.357 =================================================================================================================== 00:11:03.357 Total : 16529.71 64.57 0.00 0.00 0.00 0.00 0.00 00:11:03.357 00:11:04.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.743 Nvme0n1 : 8.00 16542.88 64.62 0.00 0.00 0.00 0.00 0.00 00:11:04.743 =================================================================================================================== 00:11:04.743 Total : 16542.88 64.62 0.00 0.00 0.00 0.00 0.00 00:11:04.743 00:11:05.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.685 Nvme0n1 : 9.00 16553.33 64.66 0.00 0.00 0.00 0.00 0.00 00:11:05.685 =================================================================================================================== 00:11:05.685 Total : 16553.33 64.66 0.00 0.00 0.00 0.00 0.00 00:11:05.685 00:11:06.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.626 Nvme0n1 : 10.00 16568.20 64.72 0.00 0.00 0.00 0.00 0.00 00:11:06.626 =================================================================================================================== 00:11:06.626 Total : 16568.20 64.72 0.00 0.00 0.00 0.00 0.00 00:11:06.626 00:11:06.626 00:11:06.626 Latency(us) 00:11:06.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.626 Nvme0n1 : 10.00 16567.01 64.71 0.00 0.00 7721.57 4860.59 13653.33 00:11:06.626 =================================================================================================================== 00:11:06.626 Total : 16567.01 64.71 0.00 0.00 7721.57 4860.59 13653.33 00:11:06.626 0 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2743306 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2743306 ']' 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2743306 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2743306 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:06.626 
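(For reference: the grow-and-verify step driven above reduces to two RPCs. A minimal by-hand sketch against the same running target, reusing the lvstore UUID from this run; the shell variables are only shorthand introduced here.)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LVS=5435adf0-37af-4484-9920-d38c85543f54
  # grow the lvol store to cover the enlarged aio backing file
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u $LVS
  # confirm the cluster count reached the expected value (99 in this run)
  clusters=$($SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters')
  [ "$clusters" -eq 99 ] || echo "unexpected total_data_clusters: $clusters"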
19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2743306' 00:11:06.626 killing process with pid 2743306 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2743306 00:11:06.626 Received shutdown signal, test time was about 10.000000 seconds 00:11:06.626 00:11:06.626 Latency(us) 00:11:06.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.626 =================================================================================================================== 00:11:06.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:06.626 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2743306 00:11:07.198 19:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.198 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2738941 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2738941 00:11:07.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2738941 Killed "${NVMF_APP[@]}" "$@" 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2745677 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 2745677 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2745677 ']' 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.459 19:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:07.720 [2024-07-22 19:15:26.486884] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:07.720 [2024-07-22 19:15:26.486989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.720 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.720 [2024-07-22 19:15:26.614869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.981 [2024-07-22 19:15:26.793448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.981 [2024-07-22 19:15:26.793493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.981 [2024-07-22 19:15:26.793506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.981 [2024-07-22 19:15:26.793515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.981 [2024-07-22 19:15:26.793527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
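(The restart above is nvmfappstart relaunching the target inside the test namespace and blocking until its RPC socket answers. A rough sketch under the same paths and core mask; the polling loop is a simplified stand-in for the script's waitforlisten helper, not the helper itself.)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # relaunch nvmf_tgt in the namespace that owns the target-side port
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # wait until the RPC server on /var/tmp/spdk.sock responds
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs >/dev/null 2>&1; do
    sleep 0.5
  done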
00:11:07.981 [2024-07-22 19:15:26.793562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.276 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.276 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:08.276 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.276 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.276 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:08.560 [2024-07-22 19:15:27.400805] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:08.560 [2024-07-22 19:15:27.400953] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:08.560 [2024-07-22 19:15:27.400996] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d8bd5e12-34e6-42c0-b537-1dc2d423b448 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d8bd5e12-34e6-42c0-b537-1dc2d423b448 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:08.560 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:08.821 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d8bd5e12-34e6-42c0-b537-1dc2d423b448 -t 2000 00:11:08.821 [ 00:11:08.821 { 00:11:08.821 "name": "d8bd5e12-34e6-42c0-b537-1dc2d423b448", 00:11:08.821 "aliases": [ 00:11:08.821 "lvs/lvol" 00:11:08.821 ], 00:11:08.821 "product_name": "Logical Volume", 00:11:08.821 "block_size": 4096, 00:11:08.821 "num_blocks": 38912, 00:11:08.821 "uuid": "d8bd5e12-34e6-42c0-b537-1dc2d423b448", 00:11:08.821 "assigned_rate_limits": { 00:11:08.821 "rw_ios_per_sec": 0, 00:11:08.821 "rw_mbytes_per_sec": 0, 00:11:08.821 "r_mbytes_per_sec": 0, 00:11:08.821 "w_mbytes_per_sec": 0 00:11:08.821 }, 00:11:08.821 "claimed": false, 00:11:08.821 "zoned": false, 
00:11:08.821 "supported_io_types": { 00:11:08.821 "read": true, 00:11:08.821 "write": true, 00:11:08.821 "unmap": true, 00:11:08.821 "flush": false, 00:11:08.821 "reset": true, 00:11:08.821 "nvme_admin": false, 00:11:08.821 "nvme_io": false, 00:11:08.821 "nvme_io_md": false, 00:11:08.821 "write_zeroes": true, 00:11:08.821 "zcopy": false, 00:11:08.821 "get_zone_info": false, 00:11:08.821 "zone_management": false, 00:11:08.821 "zone_append": false, 00:11:08.821 "compare": false, 00:11:08.821 "compare_and_write": false, 00:11:08.821 "abort": false, 00:11:08.821 "seek_hole": true, 00:11:08.821 "seek_data": true, 00:11:08.821 "copy": false, 00:11:08.821 "nvme_iov_md": false 00:11:08.821 }, 00:11:08.821 "driver_specific": { 00:11:08.821 "lvol": { 00:11:08.821 "lvol_store_uuid": "5435adf0-37af-4484-9920-d38c85543f54", 00:11:08.821 "base_bdev": "aio_bdev", 00:11:08.821 "thin_provision": false, 00:11:08.821 "num_allocated_clusters": 38, 00:11:08.821 "snapshot": false, 00:11:08.821 "clone": false, 00:11:08.821 "esnap_clone": false 00:11:08.821 } 00:11:08.821 } 00:11:08.821 } 00:11:08.821 ] 00:11:08.821 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:08.821 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:08.821 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:09.082 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:09.082 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:09.082 19:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:09.082 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:09.082 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:09.343 [2024-07-22 19:15:28.152441] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:09.343 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:09.604 request: 00:11:09.604 { 00:11:09.604 "uuid": "5435adf0-37af-4484-9920-d38c85543f54", 00:11:09.604 "method": "bdev_lvol_get_lvstores", 00:11:09.604 "req_id": 1 00:11:09.604 } 00:11:09.604 Got JSON-RPC error response 00:11:09.604 response: 00:11:09.604 { 00:11:09.604 "code": -19, 00:11:09.604 "message": "No such device" 00:11:09.604 } 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:09.604 aio_bdev 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d8bd5e12-34e6-42c0-b537-1dc2d423b448 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d8bd5e12-34e6-42c0-b537-1dc2d423b448 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:09.604 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:09.865 19:15:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d8bd5e12-34e6-42c0-b537-1dc2d423b448 -t 2000 00:11:09.865 [ 00:11:09.865 { 00:11:09.865 "name": "d8bd5e12-34e6-42c0-b537-1dc2d423b448", 00:11:09.865 "aliases": [ 00:11:09.865 "lvs/lvol" 00:11:09.865 ], 00:11:09.865 "product_name": "Logical Volume", 00:11:09.865 "block_size": 4096, 00:11:09.865 "num_blocks": 38912, 00:11:09.865 "uuid": "d8bd5e12-34e6-42c0-b537-1dc2d423b448", 00:11:09.865 "assigned_rate_limits": { 00:11:09.865 "rw_ios_per_sec": 0, 00:11:09.865 "rw_mbytes_per_sec": 0, 00:11:09.865 "r_mbytes_per_sec": 0, 00:11:09.865 "w_mbytes_per_sec": 0 00:11:09.865 }, 00:11:09.865 "claimed": false, 00:11:09.865 "zoned": false, 00:11:09.865 "supported_io_types": { 00:11:09.865 "read": true, 00:11:09.865 "write": true, 00:11:09.865 "unmap": true, 00:11:09.865 "flush": false, 00:11:09.865 "reset": true, 00:11:09.865 "nvme_admin": false, 00:11:09.865 "nvme_io": false, 00:11:09.865 "nvme_io_md": false, 00:11:09.865 "write_zeroes": true, 00:11:09.865 "zcopy": false, 00:11:09.865 "get_zone_info": false, 00:11:09.865 "zone_management": false, 00:11:09.865 "zone_append": false, 00:11:09.865 "compare": false, 00:11:09.865 "compare_and_write": false, 00:11:09.865 "abort": false, 00:11:09.865 "seek_hole": true, 00:11:09.865 "seek_data": true, 00:11:09.865 "copy": false, 00:11:09.865 "nvme_iov_md": false 00:11:09.865 }, 00:11:09.865 "driver_specific": { 00:11:09.865 "lvol": { 00:11:09.865 "lvol_store_uuid": "5435adf0-37af-4484-9920-d38c85543f54", 00:11:09.865 "base_bdev": "aio_bdev", 00:11:09.865 "thin_provision": false, 00:11:09.865 "num_allocated_clusters": 38, 00:11:09.865 "snapshot": false, 00:11:09.865 "clone": false, 00:11:09.865 "esnap_clone": false 00:11:09.865 } 00:11:09.865 } 00:11:09.865 } 00:11:09.865 ] 00:11:09.866 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:09.866 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:09.866 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:10.126 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:10.126 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5435adf0-37af-4484-9920-d38c85543f54 00:11:10.126 19:15:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:10.387 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:10.387 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d8bd5e12-34e6-42c0-b537-1dc2d423b448 00:11:10.387 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5435adf0-37af-4484-9920-d38c85543f54 
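(Stripped of the xtrace noise, the dirty-recovery check just performed is: drop the backing aio bdev, confirm the lvstore disappears, recreate the aio bdev so the blobstore replays its dirty metadata, then confirm the lvol and its cluster counts are back before tearing down. A condensed sketch with the same RPCs and the UUIDs from this run:)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  LVS=5435adf0-37af-4484-9920-d38c85543f54
  LVOL=d8bd5e12-34e6-42c0-b537-1dc2d423b448
  $RPC bdev_aio_delete aio_bdev
  $RPC bdev_lvol_get_lvstores -u $LVS && echo "lvstore unexpectedly still present"   # expected to fail with 'No such device'
  # re-attach the backing file; blobstore recovery replays the dirty metadata
  $RPC bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b $LVOL -t 2000 >/dev/null      # the lvol is visible again
  $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters, .[0].total_data_clusters'
  # cleanup, mirroring the end of the test
  $RPC bdev_lvol_delete $LVOL
  $RPC bdev_lvol_delete_lvstore -u $LVS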
00:11:10.648 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:10.648 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:10.909 00:11:10.909 real 0m17.244s 00:11:10.909 user 0m45.486s 00:11:10.909 sys 0m2.886s 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:10.909 ************************************ 00:11:10.909 END TEST lvs_grow_dirty 00:11:10.909 ************************************ 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:10.909 nvmf_trace.0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.909 rmmod nvme_tcp 00:11:10.909 rmmod nvme_fabrics 00:11:10.909 rmmod nvme_keyring 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2745677 ']' 00:11:10.909 
19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2745677 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2745677 ']' 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2745677 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2745677 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2745677' 00:11:10.909 killing process with pid 2745677 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2745677 00:11:10.909 19:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2745677 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.851 19:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.400 00:11:14.400 real 0m44.561s 00:11:14.400 user 1m7.386s 00:11:14.400 sys 0m10.031s 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 ************************************ 00:11:14.400 END TEST nvmf_lvs_grow 00:11:14.400 ************************************ 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 ************************************ 00:11:14.400 START TEST nvmf_bdev_io_wait 
00:11:14.400 ************************************ 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:14.400 * Looking for test storage... 00:11:14.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.400 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.401 
19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.401 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.401 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:11:20.990 19:15:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:20.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:20.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:20.990 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:20.990 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.990 19:15:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.990 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.991 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.253 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.253 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:11:21.253 00:11:21.253 --- 10.0.0.2 ping statistics --- 00:11:21.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.253 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:11:21.253 00:11:21.253 --- 10.0.0.1 ping statistics --- 00:11:21.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.253 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2750745 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2750745 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2750745 ']' 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.253 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.515 [2024-07-22 19:15:40.231856] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
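(The two pings above close out the TCP network setup from nvmf/common.sh: one e810 port stays in the root namespace as the initiator at 10.0.0.1, the other is moved into a namespace for the target at 10.0.0.2, and connectivity is proven in both directions. Condensed, with the interface names from this machine:)

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace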
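(Because this nvmf_tgt instance is started with --wait-for-rpc, the bdev options must be set over RPC before framework_start_init, and only after that can the transport, malloc bdev and test subsystem be created; that is what the rpc_cmd calls below do. Roughly, with the same parameters the test uses and rpc.py standing in for the script's rpc_cmd wrapper:)

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_set_options -p 5 -c 1     # tiny bdev_io pool/cache, to force the io_wait path under load
  $RPC framework_start_init           # finish startup now that the pre-init options are set
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420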
00:11:21.515 [2024-07-22 19:15:40.231980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.515 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.515 [2024-07-22 19:15:40.365338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.776 [2024-07-22 19:15:40.550039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.776 [2024-07-22 19:15:40.550086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.776 [2024-07-22 19:15:40.550099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.776 [2024-07-22 19:15:40.550109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.777 [2024-07-22 19:15:40.550120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.777 [2024-07-22 19:15:40.550280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.777 [2024-07-22 19:15:40.550325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.777 [2024-07-22 19:15:40.550455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.777 [2024-07-22 19:15:40.550482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.038 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.038 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:22.038 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.038 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.038 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.299 19:15:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 [2024-07-22 19:15:41.200683] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.299 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.559 Malloc0 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.559 [2024-07-22 19:15:41.305930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2751044 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2751046 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.559 { 00:11:22.559 "params": { 00:11:22.559 "name": "Nvme$subsystem", 00:11:22.559 "trtype": "$TEST_TRANSPORT", 00:11:22.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.559 "adrfam": "ipv4", 00:11:22.559 "trsvcid": "$NVMF_PORT", 00:11:22.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.559 "hdgst": ${hdgst:-false}, 00:11:22.559 "ddgst": ${ddgst:-false} 00:11:22.559 }, 00:11:22.559 "method": "bdev_nvme_attach_controller" 00:11:22.559 } 00:11:22.559 EOF 00:11:22.559 )") 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2751049 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2751053 00:11:22.559 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.559 { 00:11:22.559 "params": { 00:11:22.559 "name": "Nvme$subsystem", 00:11:22.559 "trtype": "$TEST_TRANSPORT", 00:11:22.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.559 "adrfam": "ipv4", 00:11:22.559 "trsvcid": "$NVMF_PORT", 00:11:22.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.559 "hdgst": ${hdgst:-false}, 00:11:22.559 "ddgst": ${ddgst:-false} 00:11:22.559 }, 00:11:22.559 "method": "bdev_nvme_attach_controller" 00:11:22.559 } 00:11:22.559 EOF 00:11:22.559 )") 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.560 { 00:11:22.560 "params": { 00:11:22.560 "name": "Nvme$subsystem", 00:11:22.560 "trtype": "$TEST_TRANSPORT", 00:11:22.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.560 "adrfam": "ipv4", 00:11:22.560 "trsvcid": "$NVMF_PORT", 00:11:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.560 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.560 "hdgst": ${hdgst:-false}, 00:11:22.560 "ddgst": ${ddgst:-false} 00:11:22.560 }, 00:11:22.560 "method": "bdev_nvme_attach_controller" 00:11:22.560 } 00:11:22.560 EOF 00:11:22.560 )") 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.560 { 00:11:22.560 "params": { 00:11:22.560 "name": "Nvme$subsystem", 00:11:22.560 "trtype": "$TEST_TRANSPORT", 00:11:22.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.560 "adrfam": "ipv4", 00:11:22.560 "trsvcid": "$NVMF_PORT", 00:11:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.560 "hdgst": ${hdgst:-false}, 00:11:22.560 "ddgst": ${ddgst:-false} 00:11:22.560 }, 00:11:22.560 "method": "bdev_nvme_attach_controller" 00:11:22.560 } 00:11:22.560 EOF 00:11:22.560 )") 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2751044 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.560 "params": { 00:11:22.560 "name": "Nvme1", 00:11:22.560 "trtype": "tcp", 00:11:22.560 "traddr": "10.0.0.2", 00:11:22.560 "adrfam": "ipv4", 00:11:22.560 "trsvcid": "4420", 00:11:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.560 "hdgst": false, 00:11:22.560 "ddgst": false 00:11:22.560 }, 00:11:22.560 "method": "bdev_nvme_attach_controller" 00:11:22.560 }' 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.560 "params": { 00:11:22.560 "name": "Nvme1", 00:11:22.560 "trtype": "tcp", 00:11:22.560 "traddr": "10.0.0.2", 00:11:22.560 "adrfam": "ipv4", 00:11:22.560 "trsvcid": "4420", 00:11:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.560 "hdgst": false, 00:11:22.560 "ddgst": false 00:11:22.560 }, 00:11:22.560 "method": "bdev_nvme_attach_controller" 00:11:22.560 }' 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.560 "params": { 00:11:22.560 "name": "Nvme1", 00:11:22.560 "trtype": "tcp", 00:11:22.560 "traddr": "10.0.0.2", 00:11:22.560 "adrfam": "ipv4", 00:11:22.560 "trsvcid": "4420", 00:11:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.560 "hdgst": false, 00:11:22.560 "ddgst": false 00:11:22.560 }, 00:11:22.560 "method": "bdev_nvme_attach_controller" 00:11:22.560 }' 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:22.560 19:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.560 "params": { 00:11:22.560 "name": "Nvme1", 00:11:22.560 "trtype": "tcp", 00:11:22.560 "traddr": "10.0.0.2", 00:11:22.560 "adrfam": "ipv4", 00:11:22.560 "trsvcid": "4420", 00:11:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.560 "hdgst": false, 00:11:22.560 "ddgst": false 00:11:22.560 }, 00:11:22.560 "method": "bdev_nvme_attach_controller" 00:11:22.560 }' 00:11:22.560 [2024-07-22 19:15:41.385316] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:22.560 [2024-07-22 19:15:41.385429] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:22.560 [2024-07-22 19:15:41.387878] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:22.560 [2024-07-22 19:15:41.387974] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:22.560 [2024-07-22 19:15:41.389390] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:22.560 [2024-07-22 19:15:41.389386] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
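The rpc_cmd calls above (bdev_set_options through nvmf_subsystem_add_listener) are the entire target-side configuration for this test: TCP transport, a 64 MiB malloc bdev, one subsystem, one namespace, one listener on 10.0.0.2:4420. rpc_cmd in the harness is effectively scripts/rpc.py pointed at the target's RPC socket, so the equivalent standalone sequence would look roughly like this (SPDK path illustrative):

    RPC="sudo ./spdk/scripts/rpc.py"                       # default socket /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB I/O unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420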
00:11:22.560 [2024-07-22 19:15:41.389492] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:22.560 [2024-07-22 19:15:41.389503] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:22.560 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.821 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.821 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.821 [2024-07-22 19:15:41.585073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.821 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.821 [2024-07-22 19:15:41.646486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.821 [2024-07-22 19:15:41.696868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.821 [2024-07-22 19:15:41.746547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.821 [2024-07-22 19:15:41.758332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:23.081 [2024-07-22 19:15:41.824286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:23.081 [2024-07-22 19:15:41.867878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:23.081 [2024-07-22 19:15:41.918733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:23.344 Running I/O for 1 seconds... 00:11:23.344 Running I/O for 1 seconds... 00:11:23.344 Running I/O for 1 seconds... 00:11:23.344 Running I/O for 1 seconds... 
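Each of the four bdevperf instances above receives its bdev configuration as JSON on /dev/fd/63, i.e. a process substitution of gen_nvmf_target_json (the heredoc/jq helper whose rendered bdev_nvme_attach_controller parameters are printed a few entries back), and each runs one workload (write, read, flush, unmap) for one second on its own core before the script reaps it by PID. Condensed to two of the four instances, assuming nvmf/common.sh is sourced so gen_nvmf_target_json is available, with an illustrative SPDK path:

    # Two concurrent bdevperf runs against the target, each pinned to one core
    # and fed its NVMe-oF attach config through a process substitution.
    ./spdk/build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    ./spdk/build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!

    # -q queue depth, -o I/O size in bytes, -w workload, -t seconds,
    # -s hugepage memory in MB, -m/-i core mask and shared-memory id.
    wait $WRITE_PID $READ_PID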
00:11:24.319 00:11:24.319 Latency(us) 00:11:24.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.319 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:24.319 Nvme1n1 : 1.00 170013.01 664.11 0.00 0.00 749.87 308.91 907.95 00:11:24.319 =================================================================================================================== 00:11:24.319 Total : 170013.01 664.11 0.00 0.00 749.87 308.91 907.95 00:11:24.319 00:11:24.319 Latency(us) 00:11:24.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.319 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:24.319 Nvme1n1 : 1.01 7147.80 27.92 0.00 0.00 17734.56 4560.21 25231.36 00:11:24.319 =================================================================================================================== 00:11:24.319 Total : 7147.80 27.92 0.00 0.00 17734.56 4560.21 25231.36 00:11:24.319 00:11:24.319 Latency(us) 00:11:24.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.319 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:24.319 Nvme1n1 : 1.01 16251.27 63.48 0.00 0.00 7848.23 6089.39 15619.41 00:11:24.319 =================================================================================================================== 00:11:24.319 Total : 16251.27 63.48 0.00 0.00 7848.23 6089.39 15619.41 00:11:24.578 00:11:24.578 Latency(us) 00:11:24.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.578 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:24.578 Nvme1n1 : 1.00 7253.35 28.33 0.00 0.00 17602.23 4314.45 35607.89 00:11:24.578 =================================================================================================================== 00:11:24.578 Total : 7253.35 28.33 0.00 0.00 17602.23 4314.45 35607.89 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2751046 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2751049 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2751053 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:11:25.148 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.148 rmmod nvme_tcp 00:11:25.148 rmmod nvme_fabrics 00:11:25.409 rmmod nvme_keyring 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2750745 ']' 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2750745 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2750745 ']' 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2750745 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750745 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750745' 00:11:25.409 killing process with pid 2750745 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2750745 00:11:25.409 19:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2750745 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.350 19:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.263 00:11:28.263 real 0m14.224s 00:11:28.263 user 0m27.675s 00:11:28.263 sys 0m7.104s 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:28.263 ************************************ 00:11:28.263 END TEST 
nvmf_bdev_io_wait 00:11:28.263 ************************************ 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:28.263 ************************************ 00:11:28.263 START TEST nvmf_queue_depth 00:11:28.263 ************************************ 00:11:28.263 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:28.524 * Looking for test storage... 00:11:28.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:28.524 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.525 19:15:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.525 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.113 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.113 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.113 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.113 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.113 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.113 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # 
net_devs=() 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:35.114 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:35.114 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:35.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:35.114 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.114 19:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.114 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.114 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.114 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
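The nvmf_tcp_init block above turns the two E810 ports (driver ice, PCI ID 0x8086:0x159b) into a point-to-point test network: cvl_0_0 is moved into a namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and connectivity is checked with ping in both directions. The same sequence written out, using the interface names from this run (run as root):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator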
00:11:35.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:11:35.376 00:11:35.376 --- 10.0.0.2 ping statistics --- 00:11:35.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.376 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:11:35.376 00:11:35.376 --- 10.0.0.1 ping statistics --- 00:11:35.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.376 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2755803 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2755803 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2755803 ']' 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.376 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:35.637 [2024-07-22 19:15:54.341282] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:35.637 [2024-07-22 19:15:54.341408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.637 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.637 [2024-07-22 19:15:54.493102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.899 [2024-07-22 19:15:54.723425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.899 [2024-07-22 19:15:54.723489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.899 [2024-07-22 19:15:54.723503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.899 [2024-07-22 19:15:54.723514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.899 [2024-07-22 19:15:54.723525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.899 [2024-07-22 19:15:54.723570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.161 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.161 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:36.161 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.161 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.161 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 [2024-07-22 19:15:55.136533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 Malloc0 00:11:36.423 19:15:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 [2024-07-22 19:15:55.229573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2756116 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2756116 /var/tmp/bdevperf.sock 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2756116 ']' 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:36.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 19:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:36.423 [2024-07-22 19:15:55.319135] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
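For the queue-depth test the initiator is bdevperf in remote-control mode: it starts idle (-z) on its own RPC socket /var/tmp/bdevperf.sock with the workload parameters preloaded (-q 1024 -o 4096 -w verify -t 10), and, as the next entries show, the NVMe-oF controller is then attached over that socket (appearing as bdev NVMe0n1) before bdevperf.py perform_tests starts the run. The same flow written out (paths illustrative; typically run as root for hugepage access, and the resident bdevperf process is stopped afterwards, as killprocess does here):

    # Start bdevperf idle with a private RPC socket; no bdevs are configured yet.
    ./spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # Attach the subsystem exported at 10.0.0.2:4420 as controller NVMe0;
    # its namespace shows up as bdev NVMe0n1.
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Kick off the preloaded verify workload and wait for its results.
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

    kill "$bdevperf_pid"    # bdevperf stays resident in -z mode until stopped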
00:11:36.423 [2024-07-22 19:15:55.319279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756116 ] 00:11:36.728 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.728 [2024-07-22 19:15:55.442614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.728 [2024-07-22 19:15:55.621883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.329 19:15:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.329 19:15:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:37.329 19:15:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:37.329 19:15:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.329 19:15:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.329 NVMe0n1 00:11:37.329 19:15:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.329 19:15:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:37.329 Running I/O for 10 seconds... 00:11:49.563 00:11:49.563 Latency(us) 00:11:49.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.563 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:49.563 Verification LBA range: start 0x0 length 0x4000 00:11:49.563 NVMe0n1 : 10.06 10218.03 39.91 0.00 0.00 99767.29 16930.13 79517.01 00:11:49.563 =================================================================================================================== 00:11:49.563 Total : 10218.03 39.91 0.00 0.00 99767.29 16930.13 79517.01 00:11:49.563 0 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2756116 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2756116 ']' 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2756116 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756116 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756116' 00:11:49.563 killing process with pid 2756116 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2756116 00:11:49.563 Received shutdown 
signal, test time was about 10.000000 seconds 00:11:49.563 00:11:49.563 Latency(us) 00:11:49.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.563 =================================================================================================================== 00:11:49.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:49.563 19:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2756116 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.564 rmmod nvme_tcp 00:11:49.564 rmmod nvme_fabrics 00:11:49.564 rmmod nvme_keyring 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2755803 ']' 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2755803 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2755803 ']' 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2755803 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2755803 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2755803' 00:11:49.564 killing process with pid 2755803 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2755803 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2755803 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.564 19:16:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.478 19:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.478 00:11:51.478 real 0m22.827s 00:11:51.478 user 0m26.845s 00:11:51.478 sys 0m6.585s 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:51.478 ************************************ 00:11:51.478 END TEST nvmf_queue_depth 00:11:51.478 ************************************ 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:51.478 ************************************ 00:11:51.478 START TEST nvmf_target_multipath 00:11:51.478 ************************************ 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:51.478 * Looking for test storage... 
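For reference, a condensed sketch of the queue_depth flow traced above: bdevperf is driven over its RPC socket, a controller is attached for nqn.2016-06.io.spdk:cnode1, and the timed verify workload is kicked off. All flags are exactly as they appear in the trace; rpc_cmd in the test framework wraps scripts/rpc.py, shown here invoked directly (a sketch, not captured output).

  # attach an NVMe-oF/TCP controller inside the running bdevperf (creates bdev NVMe0n1)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the configured workload (10 s verify at queue depth 1024 in this run)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests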
00:11:51.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.478 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.479 19:16:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:59.625 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:59.625 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:59.625 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.625 19:16:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:59.625 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.625 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.626 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:11:59.626 00:11:59.626 --- 10.0.0.2 ping statistics --- 00:11:59.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.626 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:11:59.626 00:11:59.626 --- 10.0.0.1 ping statistics --- 00:11:59.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.626 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:59.626 only one NIC for nvmf test 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:59.626 rmmod nvme_tcp 00:11:59.626 rmmod nvme_fabrics 00:11:59.626 rmmod nvme_keyring 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.626 19:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.010 00:12:01.010 real 0m9.556s 
00:12:01.010 user 0m2.122s 00:12:01.010 sys 0m5.334s 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:01.010 ************************************ 00:12:01.010 END TEST nvmf_target_multipath 00:12:01.010 ************************************ 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:01.010 ************************************ 00:12:01.010 START TEST nvmf_zcopy 00:12:01.010 ************************************ 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:01.010 * Looking for test storage... 00:12:01.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:01.010 19:16:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.010 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.011 19:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:07.600 19:16:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.600 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.862 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.862 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.862 19:16:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.862 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:12:08.124 00:12:08.124 --- 10.0.0.2 ping statistics --- 00:12:08.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.124 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:12:08.124 00:12:08.124 --- 10.0.0.1 ping statistics --- 00:12:08.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.124 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2766836 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2766836 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2766836 ']' 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.124 19:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.124 [2024-07-22 19:16:26.994355] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:08.124 [2024-07-22 19:16:26.994477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.124 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.385 [2024-07-22 19:16:27.143554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.646 [2024-07-22 19:16:27.366418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.646 [2024-07-22 19:16:27.366486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.646 [2024-07-22 19:16:27.366500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.646 [2024-07-22 19:16:27.366510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.646 [2024-07-22 19:16:27.366523] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.646 [2024-07-22 19:16:27.366560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.907 [2024-07-22 19:16:27.789694] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.907 [2024-07-22 19:16:27.813982] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.907 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.168 malloc0 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:09.168 { 00:12:09.168 "params": { 00:12:09.168 "name": "Nvme$subsystem", 00:12:09.168 "trtype": "$TEST_TRANSPORT", 00:12:09.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.168 "adrfam": "ipv4", 00:12:09.168 "trsvcid": "$NVMF_PORT", 00:12:09.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.168 "hdgst": ${hdgst:-false}, 00:12:09.168 "ddgst": ${ddgst:-false} 00:12:09.168 }, 00:12:09.168 "method": "bdev_nvme_attach_controller" 00:12:09.168 } 00:12:09.168 EOF 00:12:09.168 )") 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
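The target-side setup just traced (zcopy-enabled TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420, a 32 MB malloc bdev with 4096-byte blocks, namespace 1) corresponds to plain scripts/rpc.py calls. A sketch using the same arguments as the rpc_cmd invocations above; the helper function and the default socket path are assumptions, the flags are taken verbatim from the log:

# Sketch: same flags as the rpc_cmd calls traced above, issued through scripts/rpc.py.
rpc() { sudo ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1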
00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:09.168 19:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:09.168 "params": { 00:12:09.168 "name": "Nvme1", 00:12:09.168 "trtype": "tcp", 00:12:09.168 "traddr": "10.0.0.2", 00:12:09.168 "adrfam": "ipv4", 00:12:09.168 "trsvcid": "4420", 00:12:09.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.168 "hdgst": false, 00:12:09.168 "ddgst": false 00:12:09.168 }, 00:12:09.168 "method": "bdev_nvme_attach_controller" 00:12:09.168 }' 00:12:09.168 [2024-07-22 19:16:27.971076] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:09.168 [2024-07-22 19:16:27.971197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2766949 ] 00:12:09.168 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.169 [2024-07-22 19:16:28.096620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.429 [2024-07-22 19:16:28.276444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.999 Running I/O for 10 seconds... 00:12:20.000 00:12:20.000 Latency(us) 00:12:20.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.000 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:20.000 Verification LBA range: start 0x0 length 0x1000 00:12:20.000 Nvme1n1 : 10.01 8329.70 65.08 0.00 0.00 15310.27 1774.93 28398.93 00:12:20.000 =================================================================================================================== 00:12:20.000 Total : 8329.70 65.08 0.00 0.00 15310.27 1774.93 28398.93 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2769204 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.571 { 00:12:20.571 "params": { 00:12:20.571 "name": "Nvme$subsystem", 00:12:20.571 "trtype": "$TEST_TRANSPORT", 00:12:20.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.571 "adrfam": "ipv4", 00:12:20.571 "trsvcid": "$NVMF_PORT", 00:12:20.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.571 "hdgst": ${hdgst:-false}, 00:12:20.571 "ddgst": ${ddgst:-false} 00:12:20.571 }, 00:12:20.571 "method": "bdev_nvme_attach_controller" 00:12:20.571 } 00:12:20.571 EOF 00:12:20.571 )") 00:12:20.571 [2024-07-22 
19:16:39.453675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.571 [2024-07-22 19:16:39.453712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:20.571 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.571 "params": { 00:12:20.571 "name": "Nvme1", 00:12:20.571 "trtype": "tcp", 00:12:20.571 "traddr": "10.0.0.2", 00:12:20.571 "adrfam": "ipv4", 00:12:20.571 "trsvcid": "4420", 00:12:20.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.571 "hdgst": false, 00:12:20.571 "ddgst": false 00:12:20.571 }, 00:12:20.571 "method": "bdev_nvme_attach_controller" 00:12:20.571 }' 00:12:20.571 [2024-07-22 19:16:39.465666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.571 [2024-07-22 19:16:39.465685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.571 [2024-07-22 19:16:39.477680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.571 [2024-07-22 19:16:39.477697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.571 [2024-07-22 19:16:39.489706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.571 [2024-07-22 19:16:39.489724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.571 [2024-07-22 19:16:39.501729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.571 [2024-07-22 19:16:39.501746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.571 [2024-07-22 19:16:39.513768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.571 [2024-07-22 19:16:39.513785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.832 [2024-07-22 19:16:39.525799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.525816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.532930] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
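Each bdevperf run in this test takes its bdev configuration as JSON on an anonymous pipe (--json /dev/fd/62 and /dev/fd/63 above), with a single bdev_nvme_attach_controller entry pointing at the listener created earlier. A sketch of the same pattern; the outer subsystems/config wrapper and the temp-file path are assumptions, only the attach-controller parameters are taken from the JSON printed in this log:

# Sketch: write the generated attach-controller entry into a bdev JSON config
# and hand it to bdevperf, mirroring the randrw invocation above.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192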
00:12:20.833 [2024-07-22 19:16:39.533028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2769204 ] 00:12:20.833 [2024-07-22 19:16:39.537821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.537837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.549865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.549882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.561883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.561900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.573927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.573944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.585959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.585978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.833 [2024-07-22 19:16:39.597976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.597992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.610025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.610041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.622046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.622062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.634067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.634082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.642694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.833 [2024-07-22 19:16:39.646106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.646121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.658141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.658157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.670173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.670189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.682208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 
19:16:39.682224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.694226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.694242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.706270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.706286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.718294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.718311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.730327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.730343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.742363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.742380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.754392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.754408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.766421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.766438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.833 [2024-07-22 19:16:39.778451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.833 [2024-07-22 19:16:39.778468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.093 [2024-07-22 19:16:39.790473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.093 [2024-07-22 19:16:39.790490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.802516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.802536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.814547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.814563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.820958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.094 [2024-07-22 19:16:39.826572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.826588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.838613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.838630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.850631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.850646] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.862671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.862687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.874701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.874717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.886731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.886748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.898773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.898790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.910797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.910813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.922822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.922839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.934857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.934874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.946882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.946898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.958923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.958939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.970954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.970970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.982983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.983000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:39.995023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:39.995039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:40.007064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:40.007082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:40.019089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:40.019106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:40.031119] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:40.031135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.094 [2024-07-22 19:16:40.043153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.094 [2024-07-22 19:16:40.043170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.055175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.055193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.067249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.067267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.079248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.079265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.087277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.087294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.095292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.095308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.107317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.107333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.119358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.119374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.131394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.131411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.143428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.143444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.155455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.155471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.354 [2024-07-22 19:16:40.167476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.354 [2024-07-22 19:16:40.167497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.179532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.179548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.191557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.191573] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.203573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.203589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.215619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.215635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.227644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.227661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.239681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.239697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.251715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.251731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.263741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.263757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.275777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.275794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 [2024-07-22 19:16:40.287819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.287837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.355 Running I/O for 5 seconds... 
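The remainder of this run is dominated by paired "Requested NSID 1 already in use" / "Unable to add namespace" errors: while bdevperf drives the random read/write workload, nvmf_subsystem_add_ns keeps being issued for NSID 1, which is already attached to cnode1, and each attempt is rejected (reported through the nvmf_rpc_ns_paused callback). The rejection is easy to reproduce in isolation; a sketch against a subsystem where malloc0 has already been attached as NSID 1, as it was at zcopy.sh@30 above:

# Sketch: re-adding a bdev with an NSID that is already taken is rejected,
# producing the same error pair seen throughout this log.
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
    || echo "rejected: Requested NSID 1 already in use"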
00:12:21.355 [2024-07-22 19:16:40.304401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.355 [2024-07-22 19:16:40.304421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.319343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.319363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.333267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.333288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.347344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.347364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.362936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.362956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.376432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.376451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.390471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.390491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.402167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.402186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.416294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.416314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.430466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.430485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.443946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.443971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.458097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.458115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.473559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.473578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.487462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.487482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.500785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 
[2024-07-22 19:16:40.500804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.514314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.514332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.527962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.527981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.541446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.541465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.555893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.555912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.567382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.567401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.649 [2024-07-22 19:16:40.581185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.649 [2024-07-22 19:16:40.581209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.595030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.595049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.608586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.608605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.622503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.622521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.633950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.633969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.647652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.647671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.661635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.661654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.675447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.675466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.689188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.689212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.702976] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.702994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.716634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.716652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.730354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.730372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.744172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.930 [2024-07-22 19:16:40.744191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.930 [2024-07-22 19:16:40.759754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.759778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.773260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.773279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.787510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.787529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.799164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.799182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.812928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.812948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.826359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.826378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.840556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.840574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.855843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.855862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.869629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.869647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.931 [2024-07-22 19:16:40.881008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.931 [2024-07-22 19:16:40.881027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.192 [2024-07-22 19:16:40.894670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.192 [2024-07-22 19:16:40.894689] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.192 [2024-07-22 19:16:40.908878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:40.908896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:40.924663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:40.924682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:40.938164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:40.938183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:40.951829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:40.951848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:40.966000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:40.966019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:40.977367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:40.977386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:40.991727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:40.991745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.005301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.005319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.018621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.018644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.032689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.032708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.046172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.046191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.059934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.059953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.073872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.073890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.087641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.087659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.101741] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.101760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.115248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.115266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.128776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.128795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.193 [2024-07-22 19:16:41.142354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.193 [2024-07-22 19:16:41.142372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.155633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.155651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.169370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.169389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.183245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.183263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.197195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.197219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.212860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.212878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.226849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.226867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.239944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.239964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.254119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.254138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.267738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.267757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.281875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.281897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.295677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.295695] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.309447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.309465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.323383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.323407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.454 [2024-07-22 19:16:41.334108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.454 [2024-07-22 19:16:41.334127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.455 [2024-07-22 19:16:41.348434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.455 [2024-07-22 19:16:41.348453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.455 [2024-07-22 19:16:41.362668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.455 [2024-07-22 19:16:41.362686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.455 [2024-07-22 19:16:41.376130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.455 [2024-07-22 19:16:41.376148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.455 [2024-07-22 19:16:41.389343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.455 [2024-07-22 19:16:41.389361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.455 [2024-07-22 19:16:41.403217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.455 [2024-07-22 19:16:41.403236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.415051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.415070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.428924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.428943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.443503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.443523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.458801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.458820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.472866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.472886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.484241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.484260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.498239] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.498258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.512174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.512193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.523461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.523481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.537720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.537743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.551335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.716 [2024-07-22 19:16:41.551354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.716 [2024-07-22 19:16:41.565123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.565142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.717 [2024-07-22 19:16:41.578691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.578710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.717 [2024-07-22 19:16:41.591702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.591721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.717 [2024-07-22 19:16:41.605484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.605504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.717 [2024-07-22 19:16:41.620005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.620024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.717 [2024-07-22 19:16:41.635440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.635460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.717 [2024-07-22 19:16:41.649338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.649357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.717 [2024-07-22 19:16:41.663700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.717 [2024-07-22 19:16:41.663719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.980 [2024-07-22 19:16:41.675258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.980 [2024-07-22 19:16:41.675278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.980 [2024-07-22 19:16:41.689214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.980 [2024-07-22 19:16:41.689232] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:22.980 [2024-07-22 19:16:41.702780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:22.980 [2024-07-22 19:16:41.702799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats continuously, one pair roughly every 10-20 ms, from 2024-07-22 19:16:41.716610 through 19:16:45.242880 (Jenkins elapsed time 00:12:22.980 - 00:12:26.376) ...]
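The two messages above are the NVMe-oF target rejecting a namespace add because the requested NSID is already taken on the subsystem. A minimal sketch of how that rejection can be provoked over SPDK's JSON-RPC socket follows (plain Python, not part of this run; the socket path, subsystem NQN and bdev names are assumptions, and the exact parameter layout should be checked against the SPDK JSON-RPC documentation):

#!/usr/bin/env python3
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"   # default SPDK RPC socket (assumption for this sketch)

def rpc(method, params, req_id=1):
    # Send a single JSON-RPC 2.0 request to the running SPDK app and decode the reply.
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK_PATH)
        s.sendall(json.dumps(req).encode())
        return json.loads(s.recv(65536).decode())

nqn = "nqn.2016-06.io.spdk:cnode1"   # hypothetical subsystem NQN, not taken from this run

# First add claims NSID 1 on the subsystem.
print(rpc("nvmf_subsystem_add_ns",
          {"nqn": nqn, "namespace": {"bdev_name": "Malloc0", "nsid": 1}}))

# Second add asks for the same NSID; the target is expected to log
# "Requested NSID 1 already in use" and return an RPC error.
print(rpc("nvmf_subsystem_add_ns",
          {"nqn": nqn, "namespace": {"bdev_name": "Malloc1", "nsid": 1}}, req_id=2))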
[... the error pair continues through 2024-07-22 19:16:45.309851 ...]
00:12:26.376 
00:12:26.376                                                                    Latency(us)
00:12:26.376 Device Information   : runtime(s)     IOPS     MiB/s   Fail/s   TO/s    Average      min       max
00:12:26.376 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:26.376 Nvme1n1              :       5.01 17428.39    136.16     0.00   0.00    7335.92  3208.53  17367.04
00:12:26.376 ===================================================================================================================
00:12:26.376 Total                :            17428.39    136.16     0.00   0.00    7335.92  3208.53  17367.04
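The IOPS and MiB/s columns above are mutually consistent given the 8192-byte IO size; a quick check (plain Python, not part of the log, values copied from the table):

iops = 17428.39
io_size_bytes = 8192                          # "IO size: 8192"
mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(round(mib_per_s, 2))                    # 136.16, matching the reported MiB/s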
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.406105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.418098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.418114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.430143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.430160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.442160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.442176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.454209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.454225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.474255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.474273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.486278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.486294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.498313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.498330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.510354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.510372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.522367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.522383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.534408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.534424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.546442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.546458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.558479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.558497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.570504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.570521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.638 [2024-07-22 19:16:45.582523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.638 [2024-07-22 19:16:45.582543] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.594563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.594579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.606591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.606607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.618615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.618631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.630657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.630673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.642679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.642699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.654723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.654738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.666758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.666775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.678792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.678809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.690827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.690843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.702851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.702867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.714871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.714887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.726911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.726928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.738936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.738952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.750977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.750993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.763002] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.763018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.775023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.775039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.787075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.787091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.799106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.799122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.811119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.811136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.823161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.823177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.835191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.835211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.900 [2024-07-22 19:16:45.847223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.900 [2024-07-22 19:16:45.847239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.859256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.859272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.871282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.871300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.883317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.883334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.895361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.895377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.907372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.907389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.919404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.919421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.931427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.931444] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.943466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.943482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.955498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.955514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.967520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.967535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.979572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.979588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:45.991598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:45.991615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 [2024-07-22 19:16:46.003620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.161 [2024-07-22 19:16:46.003637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2769204) - No such process 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2769204 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.161 delay0 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.161 19:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:27.161 EAL: No free 2048 kB hugepages reported on 
node 1 00:12:27.422 [2024-07-22 19:16:46.178823] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:34.009 Initializing NVMe Controllers 00:12:34.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:34.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:34.009 Initialization complete. Launching workers. 00:12:34.009 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 411 00:12:34.009 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 698, failed to submit 33 00:12:34.009 success 542, unsuccess 156, failed 0 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.009 rmmod nvme_tcp 00:12:34.009 rmmod nvme_fabrics 00:12:34.009 rmmod nvme_keyring 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2766836 ']' 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2766836 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2766836 ']' 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2766836 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2766836 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2766836' 00:12:34.009 killing process with pid 2766836 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2766836 00:12:34.009 19:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2766836 00:12:34.269 19:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.269 19:16:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.269 19:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.269 19:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.269 19:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.269 19:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.269 19:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.269 19:16:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.815 00:12:36.815 real 0m35.450s 00:12:36.815 user 0m48.933s 00:12:36.815 sys 0m10.291s 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:36.815 ************************************ 00:12:36.815 END TEST nvmf_zcopy 00:12:36.815 ************************************ 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:36.815 ************************************ 00:12:36.815 START TEST nvmf_nmic 00:12:36.815 ************************************ 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:36.815 * Looking for test storage... 
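For reference, the zcopy tail above (removing the malloc namespace, wrapping malloc0 in a delay bdev, re-adding it, then driving it with the abort example) can be replayed by hand. A minimal sketch, assuming the same SPDK tree layout and that scripts/rpc.py stands in for the harness's rpc_cmd wrapper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # drop the current NSID 1, wrap malloc0 in a 1-second-latency delay bdev, re-add it as NSID 1
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # exercise abort handling against the delayed namespace for 5 seconds, as in the run above
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'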
00:12:36.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.815 19:16:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.815 19:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:43.402 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:43.402 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.402 19:17:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:43.402 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:43.402 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.402 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.403 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:43.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:12:43.664 00:12:43.664 --- 10.0.0.2 ping statistics --- 00:12:43.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.664 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:12:43.664 00:12:43.664 --- 10.0.0.1 ping statistics --- 00:12:43.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.664 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2775960 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2775960 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2775960 ']' 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.664 19:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:43.925 [2024-07-22 19:17:02.629868] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:43.925 [2024-07-22 19:17:02.629990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.925 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.925 [2024-07-22 19:17:02.765069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.186 [2024-07-22 19:17:02.954729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.186 [2024-07-22 19:17:02.954773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.186 [2024-07-22 19:17:02.954785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.186 [2024-07-22 19:17:02.954795] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.186 [2024-07-22 19:17:02.954805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
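The target just launched above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit set up: the target-side port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, while the initiator keeps cvl_0_1 as 10.0.0.1 in the root namespace. A condensed sketch of that plumbing, assuming the same interface names and addressing shown in the log:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target reachability check
  # launch the target inside the namespace, as done above
  ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Splitting the two ports across namespaces is what lets one machine act as both TCP initiator and target over the physical NICs instead of looping back through the kernel.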
00:12:44.186 [2024-07-22 19:17:02.955003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.186 [2024-07-22 19:17:02.955101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.186 [2024-07-22 19:17:02.955279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.186 [2024-07-22 19:17:02.955306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.447 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.447 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:44.447 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.447 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.447 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 [2024-07-22 19:17:03.427851] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 Malloc0 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 [2024-07-22 19:17:03.524506] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:44.707 test case1: single bdev can't be used in multiple subsystems 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.707 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.708 [2024-07-22 19:17:03.560425] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:44.708 [2024-07-22 19:17:03.560456] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:44.708 [2024-07-22 19:17:03.560474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.708 request: 00:12:44.708 { 00:12:44.708 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:44.708 "namespace": { 00:12:44.708 "bdev_name": "Malloc0", 00:12:44.708 "no_auto_visible": false 00:12:44.708 }, 00:12:44.708 "method": "nvmf_subsystem_add_ns", 00:12:44.708 "req_id": 1 00:12:44.708 } 00:12:44.708 Got JSON-RPC error response 00:12:44.708 response: 00:12:44.708 { 00:12:44.708 "code": -32602, 00:12:44.708 "message": "Invalid parameters" 00:12:44.708 } 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:44.708 Adding namespace failed - expected result. 
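Test case 1 above provokes that JSON-RPC failure on purpose: once a bdev is attached to one subsystem it is claimed exclusive_write, so a second nvmf_subsystem_add_ns against the same bdev is rejected with -32602. A minimal sketch of the same sequence, again assuming scripts/rpc.py in place of the rpc_cmd wrapper:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'add_ns failed as expected'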
00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:44.708 test case2: host connect to nvmf target in multiple paths 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.708 [2024-07-22 19:17:03.572577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.708 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.622 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:48.006 19:17:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.006 19:17:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.006 19:17:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.006 19:17:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:48.006 19:17:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.919 19:17:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.919 19:17:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.919 19:17:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.919 19:17:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.919 19:17:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.919 19:17:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:49.919 19:17:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:49.919 [global] 00:12:49.919 thread=1 00:12:49.919 invalidate=1 00:12:49.919 rw=write 00:12:49.919 time_based=1 00:12:49.919 runtime=1 00:12:49.919 ioengine=libaio 00:12:49.919 direct=1 00:12:49.919 bs=4096 00:12:49.919 iodepth=1 00:12:49.919 norandommap=0 00:12:49.919 numjobs=1 00:12:49.919 00:12:49.919 verify_dump=1 00:12:49.919 verify_backlog=512 00:12:49.919 verify_state_save=0 00:12:49.919 do_verify=1 00:12:49.919 verify=crc32c-intel 00:12:49.919 [job0] 00:12:49.919 filename=/dev/nvme0n1 00:12:49.919 Could not set queue depth (nvme0n1) 00:12:50.179 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:12:50.179 fio-3.35 00:12:50.179 Starting 1 thread 00:12:51.651 00:12:51.651 job0: (groupid=0, jobs=1): err= 0: pid=2777427: Mon Jul 22 19:17:10 2024 00:12:51.651 read: IOPS=326, BW=1306KiB/s (1337kB/s)(1320KiB/1011msec) 00:12:51.651 slat (nsec): min=24568, max=60568, avg=25798.00, stdev=4155.32 00:12:51.651 clat (usec): min=825, max=42200, avg=1784.61, stdev=4982.46 00:12:51.651 lat (usec): min=850, max=42225, avg=1810.41, stdev=4982.65 00:12:51.651 clat percentiles (usec): 00:12:51.651 | 1.00th=[ 906], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1123], 00:12:51.651 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1205], 00:12:51.651 | 70.00th=[ 1221], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287], 00:12:51.651 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:51.651 | 99.99th=[42206] 00:12:51.651 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:12:51.651 slat (usec): min=9, max=29021, avg=86.37, stdev=1281.30 00:12:51.651 clat (usec): min=323, max=1031, avg=708.12, stdev=102.97 00:12:51.651 lat (usec): min=334, max=29837, avg=794.49, stdev=1290.42 00:12:51.651 clat percentiles (usec): 00:12:51.651 | 1.00th=[ 424], 5.00th=[ 515], 10.00th=[ 570], 20.00th=[ 627], 00:12:51.651 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 750], 00:12:51.651 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 816], 95.00th=[ 840], 00:12:51.651 | 99.00th=[ 922], 99.50th=[ 1029], 99.90th=[ 1029], 99.95th=[ 1029], 00:12:51.651 | 99.99th=[ 1029] 00:12:51.651 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:51.651 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:51.651 lat (usec) : 500=2.49%, 750=34.44%, 1000=25.18% 00:12:51.651 lat (msec) : 2=37.29%, 50=0.59% 00:12:51.651 cpu : usr=1.19%, sys=2.48%, ctx=845, majf=0, minf=1 00:12:51.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.651 issued rwts: total=330,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.651 00:12:51.651 Run status group 0 (all jobs): 00:12:51.651 READ: bw=1306KiB/s (1337kB/s), 1306KiB/s-1306KiB/s (1337kB/s-1337kB/s), io=1320KiB (1352kB), run=1011-1011msec 00:12:51.651 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:12:51.651 00:12:51.651 Disk stats (read/write): 00:12:51.651 nvme0n1: ios=379/512, merge=0/0, ticks=1025/348, in_queue=1373, util=98.90% 00:12:51.651 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o 
NAME,SERIAL 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.911 rmmod nvme_tcp 00:12:51.911 rmmod nvme_fabrics 00:12:51.911 rmmod nvme_keyring 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2775960 ']' 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2775960 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2775960 ']' 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2775960 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2775960 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:51.911 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2775960' 00:12:51.911 killing process with pid 2775960 00:12:51.912 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2775960 00:12:51.912 19:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2775960 00:12:52.853 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.853 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.853 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.853 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.853 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.853 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.853 19:17:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.853 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.399 00:12:55.399 real 0m18.548s 00:12:55.399 user 0m47.615s 00:12:55.399 sys 0m6.266s 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:55.399 ************************************ 00:12:55.399 END TEST nvmf_nmic 00:12:55.399 ************************************ 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:55.399 ************************************ 00:12:55.399 START TEST nvmf_fio_target 00:12:55.399 ************************************ 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:55.399 * Looking for test storage... 00:12:55.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 
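(Illustrative sketch, not part of the captured trace: the host identity that nvmf/common.sh derives in the lines above is what the later "nvme connect" calls pass along; the trace shows the harness reusing the UUID portion of the generated NQN as the host ID. Reproducing that step by hand, using only commands that appear in this log, would look roughly like the following.)
    HOSTNQN=$(nvme gen-hostnqn)                                  # yields nqn.2014-08.org.nvmexpress:uuid:<uuid>, as seen above
    HOSTID=${HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}           # assumption: host ID is simply the UUID part of the NQN
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420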
00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.399 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:01.989 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:01.989 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:01.989 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:01.989 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.989 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.250 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.250 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.250 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.250 19:17:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.250 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.250 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.250 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:13:02.250 00:13:02.250 --- 10.0.0.2 ping statistics --- 00:13:02.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.250 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:13:02.250 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:13:02.250 00:13:02.250 --- 10.0.0.1 ping statistics --- 00:13:02.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.250 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2782098 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2782098 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2782098 ']' 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.512 19:17:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.512 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.512 [2024-07-22 19:17:21.354468] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:02.512 [2024-07-22 19:17:21.354591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.512 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.773 [2024-07-22 19:17:21.493178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.773 [2024-07-22 19:17:21.680971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.773 [2024-07-22 19:17:21.681013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.773 [2024-07-22 19:17:21.681027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.773 [2024-07-22 19:17:21.681037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.773 [2024-07-22 19:17:21.681047] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
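(Illustrative sketch, not part of the captured trace: once the target application above is up, the provisioning that the trace below performs through the full rpc.py path condenses to the following sequence; it assumes rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock socket, and uses only commands and arguments that appear verbatim in this log.)
    rpc.py nvmf_create_transport -t tcp -o -u 8192                               # TCP transport, options as passed by the harness
    rpc.py bdev_malloc_create 64 512                                             # repeated by the harness to create Malloc0..Malloc6 (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512)
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'             # striped RAID0 bdev over two malloc bdevs
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0              # likewise for Malloc1, raid0 and concat0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420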
00:13:02.773 [2024-07-22 19:17:21.681259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.773 [2024-07-22 19:17:21.681326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.773 [2024-07-22 19:17:21.681585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.773 [2024-07-22 19:17:21.681608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.352 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.352 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:03.352 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.352 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.352 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.352 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.352 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:03.352 [2024-07-22 19:17:22.271219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.613 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:03.613 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:03.613 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:03.873 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:03.873 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:04.133 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:04.133 19:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:04.393 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:04.393 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:04.393 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:04.653 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:04.653 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:04.914 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:04.914 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:05.174 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:05.174 19:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:05.435 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:05.435 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:05.435 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:05.696 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:05.696 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.696 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.956 [2024-07-22 19:17:24.781726] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.956 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:06.217 19:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:06.217 19:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.131 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:08.131 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.131 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.131 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:08.131 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:08.131 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:10.074 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:10.074 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:10.074 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.074 19:17:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:10.074 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.074 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:10.074 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:10.074 [global] 00:13:10.074 thread=1 00:13:10.074 invalidate=1 00:13:10.074 rw=write 00:13:10.074 time_based=1 00:13:10.074 runtime=1 00:13:10.074 ioengine=libaio 00:13:10.074 direct=1 00:13:10.074 bs=4096 00:13:10.074 iodepth=1 00:13:10.074 norandommap=0 00:13:10.074 numjobs=1 00:13:10.074 00:13:10.074 verify_dump=1 00:13:10.074 verify_backlog=512 00:13:10.074 verify_state_save=0 00:13:10.074 do_verify=1 00:13:10.074 verify=crc32c-intel 00:13:10.074 [job0] 00:13:10.074 filename=/dev/nvme0n1 00:13:10.074 [job1] 00:13:10.074 filename=/dev/nvme0n2 00:13:10.074 [job2] 00:13:10.074 filename=/dev/nvme0n3 00:13:10.074 [job3] 00:13:10.074 filename=/dev/nvme0n4 00:13:10.074 Could not set queue depth (nvme0n1) 00:13:10.074 Could not set queue depth (nvme0n2) 00:13:10.074 Could not set queue depth (nvme0n3) 00:13:10.074 Could not set queue depth (nvme0n4) 00:13:10.342 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.342 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.342 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.342 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.342 fio-3.35 00:13:10.342 Starting 4 threads 00:13:11.748 00:13:11.748 job0: (groupid=0, jobs=1): err= 0: pid=2783965: Mon Jul 22 19:17:30 2024 00:13:11.748 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:11.748 slat (nsec): min=6853, max=44486, avg=26085.55, stdev=3443.23 00:13:11.748 clat (usec): min=659, max=41882, avg=1024.94, stdev=1815.01 00:13:11.748 lat (usec): min=685, max=41909, avg=1051.03, stdev=1815.08 00:13:11.748 clat percentiles (usec): 00:13:11.748 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 857], 20.00th=[ 889], 00:13:11.748 | 30.00th=[ 906], 40.00th=[ 922], 50.00th=[ 930], 60.00th=[ 947], 00:13:11.748 | 70.00th=[ 963], 80.00th=[ 979], 90.00th=[ 1020], 95.00th=[ 1074], 00:13:11.748 | 99.00th=[ 1303], 99.50th=[ 1958], 99.90th=[41681], 99.95th=[41681], 00:13:11.748 | 99.99th=[41681] 00:13:11.748 write: IOPS=813, BW=3253KiB/s (3331kB/s)(3256KiB/1001msec); 0 zone resets 00:13:11.748 slat (usec): min=9, max=2541, avg=34.88, stdev=120.57 00:13:11.748 clat (usec): min=238, max=991, avg=520.75, stdev=89.88 00:13:11.748 lat (usec): min=250, max=3065, avg=555.63, stdev=152.60 00:13:11.748 clat percentiles (usec): 00:13:11.748 | 1.00th=[ 338], 5.00th=[ 371], 10.00th=[ 412], 20.00th=[ 441], 00:13:11.748 | 30.00th=[ 474], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 553], 00:13:11.748 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 668], 00:13:11.748 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 988], 99.95th=[ 988], 00:13:11.748 | 99.99th=[ 988] 00:13:11.748 bw ( KiB/s): min= 4096, max= 4096, per=36.58%, avg=4096.00, stdev= 0.00, samples=1 00:13:11.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, 
samples=1 00:13:11.748 lat (usec) : 250=0.15%, 500=23.08%, 750=37.41%, 1000=34.16% 00:13:11.748 lat (msec) : 2=5.05%, 4=0.08%, 50=0.08% 00:13:11.748 cpu : usr=1.30%, sys=4.40%, ctx=1329, majf=0, minf=1 00:13:11.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.748 issued rwts: total=512,814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.748 job1: (groupid=0, jobs=1): err= 0: pid=2783987: Mon Jul 22 19:17:30 2024 00:13:11.748 read: IOPS=13, BW=55.9KiB/s (57.2kB/s)(56.0KiB/1002msec) 00:13:11.748 slat (nsec): min=26060, max=27064, avg=26528.57, stdev=300.96 00:13:11.748 clat (usec): min=41801, max=43049, avg=42281.37, stdev=487.74 00:13:11.748 lat (usec): min=41827, max=43076, avg=42307.90, stdev=487.69 00:13:11.748 clat percentiles (usec): 00:13:11.748 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:13:11.748 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:11.748 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:13:11.748 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:11.748 | 99.99th=[43254] 00:13:11.748 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:13:11.748 slat (usec): min=9, max=3008, avg=43.03, stdev=146.35 00:13:11.748 clat (usec): min=300, max=1019, avg=749.86, stdev=135.30 00:13:11.748 lat (usec): min=334, max=3632, avg=792.88, stdev=197.16 00:13:11.748 clat percentiles (usec): 00:13:11.748 | 1.00th=[ 367], 5.00th=[ 490], 10.00th=[ 545], 20.00th=[ 644], 00:13:11.748 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 766], 60.00th=[ 807], 00:13:11.748 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 938], 00:13:11.748 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1020], 99.95th=[ 1020], 00:13:11.748 | 99.99th=[ 1020] 00:13:11.748 bw ( KiB/s): min= 4096, max= 4096, per=36.58%, avg=4096.00, stdev= 0.00, samples=1 00:13:11.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:11.748 lat (usec) : 500=5.32%, 750=38.21%, 1000=53.42% 00:13:11.748 lat (msec) : 2=0.38%, 50=2.66% 00:13:11.748 cpu : usr=1.40%, sys=1.90%, ctx=529, majf=0, minf=1 00:13:11.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.748 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.748 job2: (groupid=0, jobs=1): err= 0: pid=2784008: Mon Jul 22 19:17:30 2024 00:13:11.748 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:11.748 slat (nsec): min=7979, max=45316, avg=27095.26, stdev=3904.98 00:13:11.748 clat (usec): min=374, max=1470, avg=936.33, stdev=79.96 00:13:11.748 lat (usec): min=394, max=1498, avg=963.42, stdev=80.68 00:13:11.748 clat percentiles (usec): 00:13:11.748 | 1.00th=[ 685], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 898], 00:13:11.748 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 938], 60.00th=[ 955], 00:13:11.748 | 70.00th=[ 971], 80.00th=[ 979], 90.00th=[ 1012], 95.00th=[ 1037], 00:13:11.748 | 99.00th=[ 1090], 99.50th=[ 1221], 99.90th=[ 1467], 99.95th=[ 1467], 00:13:11.748 | 
99.99th=[ 1467] 00:13:11.748 write: IOPS=966, BW=3864KiB/s (3957kB/s)(3868KiB/1001msec); 0 zone resets 00:13:11.748 slat (nsec): min=9757, max=68875, avg=28560.98, stdev=11233.37 00:13:11.748 clat (usec): min=175, max=870, avg=484.49, stdev=89.13 00:13:11.748 lat (usec): min=185, max=883, avg=513.05, stdev=92.34 00:13:11.748 clat percentiles (usec): 00:13:11.748 | 1.00th=[ 215], 5.00th=[ 318], 10.00th=[ 379], 20.00th=[ 412], 00:13:11.748 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 506], 60.00th=[ 529], 00:13:11.748 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 594], 00:13:11.748 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 873], 99.95th=[ 873], 00:13:11.748 | 99.99th=[ 873] 00:13:11.748 bw ( KiB/s): min= 4096, max= 4096, per=36.58%, avg=4096.00, stdev= 0.00, samples=1 00:13:11.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:11.748 lat (usec) : 250=0.81%, 500=31.10%, 750=33.87%, 1000=30.09% 00:13:11.748 lat (msec) : 2=4.12% 00:13:11.748 cpu : usr=1.80%, sys=4.50%, ctx=1480, majf=0, minf=1 00:13:11.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.748 issued rwts: total=512,967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.748 job3: (groupid=0, jobs=1): err= 0: pid=2784014: Mon Jul 22 19:17:30 2024 00:13:11.748 read: IOPS=186, BW=747KiB/s (765kB/s)(748KiB/1001msec) 00:13:11.748 slat (nsec): min=9188, max=41509, avg=25827.81, stdev=3435.00 00:13:11.748 clat (usec): min=822, max=43042, avg=3211.05, stdev=8871.12 00:13:11.748 lat (usec): min=847, max=43068, avg=3236.87, stdev=8870.45 00:13:11.748 clat percentiles (usec): 00:13:11.748 | 1.00th=[ 865], 5.00th=[ 906], 10.00th=[ 971], 20.00th=[ 1037], 00:13:11.748 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 00:13:11.748 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1319], 95.00th=[19530], 00:13:11.749 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:11.749 | 99.99th=[43254] 00:13:11.749 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:13:11.749 slat (usec): min=10, max=2499, avg=36.27, stdev=109.55 00:13:11.749 clat (usec): min=325, max=1387, avg=725.68, stdev=127.21 00:13:11.749 lat (usec): min=358, max=3211, avg=761.95, stdev=169.87 00:13:11.749 clat percentiles (usec): 00:13:11.749 | 1.00th=[ 437], 5.00th=[ 529], 10.00th=[ 562], 20.00th=[ 619], 00:13:11.749 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 758], 00:13:11.749 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 873], 95.00th=[ 914], 00:13:11.749 | 99.00th=[ 1106], 99.50th=[ 1172], 99.90th=[ 1385], 99.95th=[ 1385], 00:13:11.749 | 99.99th=[ 1385] 00:13:11.749 bw ( KiB/s): min= 4096, max= 4096, per=36.58%, avg=4096.00, stdev= 0.00, samples=1 00:13:11.749 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:11.749 lat (usec) : 500=2.29%, 750=40.34%, 1000=32.90% 00:13:11.749 lat (msec) : 2=23.03%, 20=0.14%, 50=1.29% 00:13:11.749 cpu : usr=0.70%, sys=2.40%, ctx=704, majf=0, minf=1 00:13:11.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.749 issued rwts: 
total=187,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.749 00:13:11.749 Run status group 0 (all jobs): 00:13:11.749 READ: bw=4890KiB/s (5008kB/s), 55.9KiB/s-2046KiB/s (57.2kB/s-2095kB/s), io=4900KiB (5018kB), run=1001-1002msec 00:13:11.749 WRITE: bw=10.9MiB/s (11.5MB/s), 2044KiB/s-3864KiB/s (2093kB/s-3957kB/s), io=11.0MiB (11.5MB), run=1001-1002msec 00:13:11.749 00:13:11.749 Disk stats (read/write): 00:13:11.749 nvme0n1: ios=558/521, merge=0/0, ticks=649/263, in_queue=912, util=86.97% 00:13:11.749 nvme0n2: ios=60/512, merge=0/0, ticks=572/303, in_queue=875, util=91.12% 00:13:11.749 nvme0n3: ios=569/640, merge=0/0, ticks=1092/311, in_queue=1403, util=92.39% 00:13:11.749 nvme0n4: ios=94/512, merge=0/0, ticks=599/362, in_queue=961, util=97.22% 00:13:11.749 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:11.749 [global] 00:13:11.749 thread=1 00:13:11.749 invalidate=1 00:13:11.749 rw=randwrite 00:13:11.749 time_based=1 00:13:11.749 runtime=1 00:13:11.749 ioengine=libaio 00:13:11.749 direct=1 00:13:11.749 bs=4096 00:13:11.749 iodepth=1 00:13:11.749 norandommap=0 00:13:11.749 numjobs=1 00:13:11.749 00:13:11.749 verify_dump=1 00:13:11.749 verify_backlog=512 00:13:11.749 verify_state_save=0 00:13:11.749 do_verify=1 00:13:11.749 verify=crc32c-intel 00:13:11.749 [job0] 00:13:11.749 filename=/dev/nvme0n1 00:13:11.749 [job1] 00:13:11.749 filename=/dev/nvme0n2 00:13:11.749 [job2] 00:13:11.749 filename=/dev/nvme0n3 00:13:11.749 [job3] 00:13:11.749 filename=/dev/nvme0n4 00:13:11.749 Could not set queue depth (nvme0n1) 00:13:11.749 Could not set queue depth (nvme0n2) 00:13:11.749 Could not set queue depth (nvme0n3) 00:13:11.749 Could not set queue depth (nvme0n4) 00:13:12.013 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.014 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.014 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.014 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.014 fio-3.35 00:13:12.014 Starting 4 threads 00:13:13.416 00:13:13.416 job0: (groupid=0, jobs=1): err= 0: pid=2784436: Mon Jul 22 19:17:31 2024 00:13:13.416 read: IOPS=15, BW=63.4KiB/s (64.9kB/s)(64.0KiB/1010msec) 00:13:13.416 slat (nsec): min=25595, max=26097, avg=25787.75, stdev=152.08 00:13:13.416 clat (usec): min=1170, max=42995, avg=39602.19, stdev=10256.76 00:13:13.416 lat (usec): min=1196, max=43021, avg=39627.98, stdev=10256.71 00:13:13.416 clat percentiles (usec): 00:13:13.416 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41681], 20.00th=[41681], 00:13:13.416 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:13.416 | 70.00th=[42206], 80.00th=[42206], 90.00th=[43254], 95.00th=[43254], 00:13:13.416 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:13.416 | 99.99th=[43254] 00:13:13.416 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:13:13.416 slat (nsec): min=8565, max=66799, avg=28606.41, stdev=9869.25 00:13:13.416 clat (usec): min=299, max=1738, avg=696.99, stdev=131.30 00:13:13.416 lat (usec): min=309, max=1778, avg=725.60, stdev=135.75 00:13:13.416 clat 
percentiles (usec): 00:13:13.416 | 1.00th=[ 388], 5.00th=[ 457], 10.00th=[ 529], 20.00th=[ 594], 00:13:13.416 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 709], 60.00th=[ 734], 00:13:13.416 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 881], 00:13:13.416 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1745], 99.95th=[ 1745], 00:13:13.416 | 99.99th=[ 1745] 00:13:13.416 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:13:13.416 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:13.416 lat (usec) : 500=8.14%, 750=55.30%, 1000=33.14% 00:13:13.417 lat (msec) : 2=0.57%, 50=2.84% 00:13:13.417 cpu : usr=0.99%, sys=1.88%, ctx=528, majf=0, minf=1 00:13:13.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.417 job1: (groupid=0, jobs=1): err= 0: pid=2784453: Mon Jul 22 19:17:31 2024 00:13:13.417 read: IOPS=15, BW=63.2KiB/s (64.8kB/s)(64.0KiB/1012msec) 00:13:13.417 slat (nsec): min=24845, max=25598, avg=25153.94, stdev=175.49 00:13:13.417 clat (usec): min=1101, max=42996, avg=39585.84, stdev=10269.37 00:13:13.417 lat (usec): min=1127, max=43021, avg=39610.99, stdev=10269.25 00:13:13.417 clat percentiles (usec): 00:13:13.417 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41681], 20.00th=[41681], 00:13:13.417 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:13.417 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:13:13.417 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:13.417 | 99.99th=[43254] 00:13:13.417 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:13:13.417 slat (nsec): min=9387, max=81288, avg=28335.42, stdev=9218.24 00:13:13.417 clat (usec): min=343, max=936, avg=700.93, stdev=103.28 00:13:13.417 lat (usec): min=356, max=967, avg=729.27, stdev=107.29 00:13:13.417 clat percentiles (usec): 00:13:13.417 | 1.00th=[ 437], 5.00th=[ 515], 10.00th=[ 570], 20.00th=[ 619], 00:13:13.417 | 30.00th=[ 660], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 734], 00:13:13.417 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 857], 00:13:13.417 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 938], 99.95th=[ 938], 00:13:13.417 | 99.99th=[ 938] 00:13:13.417 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:13:13.417 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:13.417 lat (usec) : 500=4.17%, 750=60.61%, 1000=32.20% 00:13:13.417 lat (msec) : 2=0.19%, 50=2.84% 00:13:13.417 cpu : usr=1.09%, sys=1.09%, ctx=530, majf=0, minf=1 00:13:13.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.417 job2: (groupid=0, jobs=1): err= 0: pid=2784474: Mon Jul 22 19:17:31 2024 00:13:13.417 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1014msec) 00:13:13.417 slat (nsec): min=25575, max=26753, avg=25877.25, stdev=293.22 
00:13:13.417 clat (usec): min=1033, max=42514, avg=39438.95, stdev=10242.46 00:13:13.417 lat (usec): min=1059, max=42539, avg=39464.82, stdev=10242.48 00:13:13.417 clat percentiles (usec): 00:13:13.417 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[41681], 20.00th=[41681], 00:13:13.417 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:13.417 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:13:13.417 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:13.417 | 99.99th=[42730] 00:13:13.417 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:13:13.417 slat (nsec): min=8938, max=63009, avg=26676.57, stdev=9216.00 00:13:13.417 clat (usec): min=313, max=1163, avg=712.06, stdev=120.99 00:13:13.417 lat (usec): min=323, max=1195, avg=738.74, stdev=124.39 00:13:13.417 clat percentiles (usec): 00:13:13.417 | 1.00th=[ 392], 5.00th=[ 478], 10.00th=[ 553], 20.00th=[ 619], 00:13:13.417 | 30.00th=[ 660], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 750], 00:13:13.417 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 889], 00:13:13.417 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1172], 99.95th=[ 1172], 00:13:13.417 | 99.99th=[ 1172] 00:13:13.417 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:13:13.417 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:13.417 lat (usec) : 500=5.87%, 750=51.89%, 1000=39.02% 00:13:13.417 lat (msec) : 2=0.38%, 50=2.84% 00:13:13.417 cpu : usr=0.89%, sys=1.28%, ctx=528, majf=0, minf=1 00:13:13.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.417 job3: (groupid=0, jobs=1): err= 0: pid=2784486: Mon Jul 22 19:17:31 2024 00:13:13.417 read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1029msec) 00:13:13.417 slat (nsec): min=24177, max=24943, avg=24452.44, stdev=214.89 00:13:13.417 clat (usec): min=989, max=43036, avg=40388.50, stdev=9847.81 00:13:13.417 lat (usec): min=1013, max=43060, avg=40412.96, stdev=9847.86 00:13:13.417 clat percentiles (usec): 00:13:13.417 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[41157], 20.00th=[42206], 00:13:13.417 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:13:13.417 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:13:13.417 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:13.417 | 99.99th=[43254] 00:13:13.417 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:13:13.417 slat (nsec): min=9172, max=48689, avg=26144.20, stdev=9165.61 00:13:13.417 clat (usec): min=289, max=823, avg=554.72, stdev=106.69 00:13:13.417 lat (usec): min=320, max=852, avg=580.86, stdev=110.09 00:13:13.417 clat percentiles (usec): 00:13:13.417 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 404], 20.00th=[ 461], 00:13:13.417 | 30.00th=[ 490], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 586], 00:13:13.417 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 725], 00:13:13.417 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 824], 99.95th=[ 824], 00:13:13.417 | 99.99th=[ 824] 00:13:13.417 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:13:13.417 iops 
: min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:13.417 lat (usec) : 500=31.32%, 750=63.21%, 1000=2.26% 00:13:13.417 lat (msec) : 50=3.21% 00:13:13.417 cpu : usr=0.78%, sys=1.26%, ctx=530, majf=0, minf=1 00:13:13.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.417 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.417 00:13:13.417 Run status group 0 (all jobs): 00:13:13.417 READ: bw=257KiB/s (263kB/s), 63.1KiB/s-70.0KiB/s (64.6kB/s-71.7kB/s), io=264KiB (270kB), run=1010-1029msec 00:13:13.417 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2028KiB/s (2038kB/s-2076kB/s), io=8192KiB (8389kB), run=1010-1029msec 00:13:13.417 00:13:13.417 Disk stats (read/write): 00:13:13.417 nvme0n1: ios=34/512, merge=0/0, ticks=600/297, in_queue=897, util=96.29% 00:13:13.417 nvme0n2: ios=42/512, merge=0/0, ticks=1347/348, in_queue=1695, util=96.43% 00:13:13.417 nvme0n3: ios=63/512, merge=0/0, ticks=508/351, in_queue=859, util=92.52% 00:13:13.417 nvme0n4: ios=40/512, merge=0/0, ticks=816/269, in_queue=1085, util=91.79% 00:13:13.417 19:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:13.417 [global] 00:13:13.417 thread=1 00:13:13.417 invalidate=1 00:13:13.417 rw=write 00:13:13.417 time_based=1 00:13:13.417 runtime=1 00:13:13.417 ioengine=libaio 00:13:13.417 direct=1 00:13:13.417 bs=4096 00:13:13.417 iodepth=128 00:13:13.417 norandommap=0 00:13:13.417 numjobs=1 00:13:13.417 00:13:13.417 verify_dump=1 00:13:13.417 verify_backlog=512 00:13:13.417 verify_state_save=0 00:13:13.417 do_verify=1 00:13:13.417 verify=crc32c-intel 00:13:13.417 [job0] 00:13:13.417 filename=/dev/nvme0n1 00:13:13.417 [job1] 00:13:13.417 filename=/dev/nvme0n2 00:13:13.417 [job2] 00:13:13.417 filename=/dev/nvme0n3 00:13:13.417 [job3] 00:13:13.417 filename=/dev/nvme0n4 00:13:13.417 Could not set queue depth (nvme0n1) 00:13:13.417 Could not set queue depth (nvme0n2) 00:13:13.417 Could not set queue depth (nvme0n3) 00:13:13.417 Could not set queue depth (nvme0n4) 00:13:13.680 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:13.680 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:13.680 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:13.680 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:13.680 fio-3.35 00:13:13.680 Starting 4 threads 00:13:15.072 00:13:15.072 job0: (groupid=0, jobs=1): err= 0: pid=2784971: Mon Jul 22 19:17:33 2024 00:13:15.072 read: IOPS=4805, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1003msec) 00:13:15.072 slat (nsec): min=868, max=25109k, avg=94307.14, stdev=811968.52 00:13:15.072 clat (usec): min=1990, max=57225, avg=13307.58, stdev=8635.16 00:13:15.072 lat (usec): min=2556, max=60572, avg=13401.89, stdev=8695.22 00:13:15.072 clat percentiles (usec): 00:13:15.072 | 1.00th=[ 4555], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 8160], 00:13:15.072 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[10421], 60.00th=[11731], 00:13:15.072 | 70.00th=[13435], 
80.00th=[15664], 90.00th=[23987], 95.00th=[35914], 00:13:15.072 | 99.00th=[45876], 99.50th=[45876], 99.90th=[51119], 99.95th=[56886], 00:13:15.072 | 99.99th=[57410] 00:13:15.072 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:15.072 slat (nsec): min=1514, max=10467k, avg=96088.12, stdev=570817.04 00:13:15.072 clat (usec): min=3000, max=74219, avg=12295.78, stdev=12459.03 00:13:15.072 lat (usec): min=3010, max=74244, avg=12391.87, stdev=12549.47 00:13:15.072 clat percentiles (usec): 00:13:15.072 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 6259], 20.00th=[ 7308], 00:13:15.072 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9241], 00:13:15.072 | 70.00th=[10159], 80.00th=[12518], 90.00th=[18744], 95.00th=[35390], 00:13:15.072 | 99.00th=[72877], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:13:15.072 | 99.99th=[73925] 00:13:15.072 bw ( KiB/s): min=17336, max=23624, per=19.54%, avg=20480.00, stdev=4446.29, samples=2 00:13:15.072 iops : min= 4334, max= 5906, avg=5120.00, stdev=1111.57, samples=2 00:13:15.072 lat (msec) : 2=0.01%, 4=0.62%, 10=57.68%, 20=31.25%, 50=8.16% 00:13:15.072 lat (msec) : 100=2.28% 00:13:15.072 cpu : usr=3.89%, sys=4.29%, ctx=467, majf=0, minf=1 00:13:15.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:15.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.072 issued rwts: total=4820,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.072 job1: (groupid=0, jobs=1): err= 0: pid=2784985: Mon Jul 22 19:17:33 2024 00:13:15.072 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec) 00:13:15.072 slat (nsec): min=915, max=9386.0k, avg=61313.84, stdev=440168.32 00:13:15.072 clat (usec): min=1592, max=24480, avg=8041.21, stdev=2326.08 00:13:15.072 lat (usec): min=1600, max=24486, avg=8102.52, stdev=2342.47 00:13:15.072 clat percentiles (usec): 00:13:15.072 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 6456], 00:13:15.072 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7898], 00:13:15.072 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[11994], 00:13:15.072 | 99.00th=[17433], 99.50th=[19006], 99.90th=[22938], 99.95th=[24511], 00:13:15.072 | 99.99th=[24511] 00:13:15.072 write: IOPS=8721, BW=34.1MiB/s (35.7MB/s)(34.2MiB/1003msec); 0 zone resets 00:13:15.072 slat (nsec): min=1670, max=7453.8k, avg=48097.16, stdev=293272.30 00:13:15.072 clat (usec): min=1181, max=14150, avg=6523.55, stdev=1551.41 00:13:15.072 lat (usec): min=1191, max=14153, avg=6571.65, stdev=1551.88 00:13:15.072 clat percentiles (usec): 00:13:15.072 | 1.00th=[ 2638], 5.00th=[ 3785], 10.00th=[ 4490], 20.00th=[ 5276], 00:13:15.072 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 6980], 00:13:15.072 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 8094], 95.00th=[ 9110], 00:13:15.072 | 99.00th=[10421], 99.50th=[12518], 99.90th=[12911], 99.95th=[13435], 00:13:15.072 | 99.99th=[14091] 00:13:15.072 bw ( KiB/s): min=32768, max=36864, per=33.22%, avg=34816.00, stdev=2896.31, samples=2 00:13:15.072 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:13:15.072 lat (msec) : 2=0.07%, 4=3.22%, 10=88.96%, 20=7.58%, 50=0.17% 00:13:15.072 cpu : usr=5.19%, sys=9.88%, ctx=747, majf=0, minf=1 00:13:15.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:15.072 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.072 issued rwts: total=8704,8748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.072 job2: (groupid=0, jobs=1): err= 0: pid=2785004: Mon Jul 22 19:17:33 2024 00:13:15.072 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:13:15.072 slat (nsec): min=995, max=11388k, avg=73120.27, stdev=521396.18 00:13:15.072 clat (usec): min=2486, max=24204, avg=9854.65, stdev=2711.22 00:13:15.072 lat (usec): min=4188, max=24210, avg=9927.77, stdev=2738.19 00:13:15.072 clat percentiles (usec): 00:13:15.072 | 1.00th=[ 5080], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 7767], 00:13:15.073 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9896], 00:13:15.073 | 70.00th=[10683], 80.00th=[11731], 90.00th=[13435], 95.00th=[14484], 00:13:15.073 | 99.00th=[18744], 99.50th=[21103], 99.90th=[24249], 99.95th=[24249], 00:13:15.073 | 99.99th=[24249] 00:13:15.073 write: IOPS=6782, BW=26.5MiB/s (27.8MB/s)(26.6MiB/1003msec); 0 zone resets 00:13:15.073 slat (nsec): min=1712, max=41291k, avg=69616.25, stdev=651831.46 00:13:15.073 clat (usec): min=2005, max=43771, avg=8334.14, stdev=3709.43 00:13:15.073 lat (usec): min=2392, max=43819, avg=8403.76, stdev=3744.51 00:13:15.073 clat percentiles (usec): 00:13:15.073 | 1.00th=[ 3589], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6128], 00:13:15.073 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 7767], 00:13:15.073 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[11731], 95.00th=[15008], 00:13:15.073 | 99.00th=[22414], 99.50th=[25035], 99.90th=[43779], 99.95th=[43779], 00:13:15.073 | 99.99th=[43779] 00:13:15.073 bw ( KiB/s): min=24960, max=28664, per=25.59%, avg=26812.00, stdev=2619.12, samples=2 00:13:15.073 iops : min= 6240, max= 7166, avg=6703.00, stdev=654.78, samples=2 00:13:15.073 lat (msec) : 4=0.95%, 10=71.99%, 20=25.46%, 50=1.60% 00:13:15.073 cpu : usr=5.19%, sys=7.88%, ctx=413, majf=0, minf=1 00:13:15.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:15.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.073 issued rwts: total=6656,6803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.073 job3: (groupid=0, jobs=1): err= 0: pid=2785011: Mon Jul 22 19:17:33 2024 00:13:15.073 read: IOPS=5322, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1004msec) 00:13:15.073 slat (nsec): min=939, max=10625k, avg=93240.26, stdev=658251.20 00:13:15.073 clat (usec): min=2785, max=35424, avg=11849.40, stdev=5452.75 00:13:15.073 lat (usec): min=5226, max=35455, avg=11942.64, stdev=5504.16 00:13:15.073 clat percentiles (usec): 00:13:15.073 | 1.00th=[ 6390], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8356], 00:13:15.073 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[11076], 00:13:15.073 | 70.00th=[12125], 80.00th=[13698], 90.00th=[19792], 95.00th=[26346], 00:13:15.073 | 99.00th=[32113], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:13:15.073 | 99.99th=[35390] 00:13:15.073 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:13:15.073 slat (nsec): min=1654, max=11883k, avg=83848.61, stdev=578762.83 00:13:15.073 clat (usec): min=819, max=32928, avg=11284.66, stdev=4847.33 00:13:15.073 
lat (usec): min=838, max=32966, avg=11368.51, stdev=4894.50 00:13:15.073 clat percentiles (usec): 00:13:15.073 | 1.00th=[ 4228], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[ 8160], 00:13:15.073 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[10421], 00:13:15.073 | 70.00th=[12256], 80.00th=[14746], 90.00th=[19792], 95.00th=[21103], 00:13:15.073 | 99.00th=[27919], 99.50th=[28967], 99.90th=[29754], 99.95th=[30540], 00:13:15.073 | 99.99th=[32900] 00:13:15.073 bw ( KiB/s): min=20480, max=24576, per=21.50%, avg=22528.00, stdev=2896.31, samples=2 00:13:15.073 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:13:15.073 lat (usec) : 1000=0.03% 00:13:15.073 lat (msec) : 2=0.06%, 4=0.18%, 10=55.33%, 20=34.96%, 50=9.44% 00:13:15.073 cpu : usr=3.99%, sys=5.48%, ctx=453, majf=0, minf=1 00:13:15.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:15.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.073 issued rwts: total=5344,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.073 00:13:15.073 Run status group 0 (all jobs): 00:13:15.073 READ: bw=99.3MiB/s (104MB/s), 18.8MiB/s-33.9MiB/s (19.7MB/s-35.5MB/s), io=99.7MiB (105MB), run=1003-1004msec 00:13:15.073 WRITE: bw=102MiB/s (107MB/s), 19.9MiB/s-34.1MiB/s (20.9MB/s-35.7MB/s), io=103MiB (108MB), run=1003-1004msec 00:13:15.073 00:13:15.073 Disk stats (read/write): 00:13:15.073 nvme0n1: ios=3634/3742, merge=0/0, ticks=40206/38005, in_queue=78211, util=96.79% 00:13:15.073 nvme0n2: ios=7188/7615, merge=0/0, ticks=55080/46993, in_queue=102073, util=97.04% 00:13:15.073 nvme0n3: ios=5430/5632, merge=0/0, ticks=51627/44912, in_queue=96539, util=99.79% 00:13:15.073 nvme0n4: ios=4637/4623, merge=0/0, ticks=30139/25433, in_queue=55572, util=99.15% 00:13:15.073 19:17:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:15.073 [global] 00:13:15.073 thread=1 00:13:15.073 invalidate=1 00:13:15.073 rw=randwrite 00:13:15.073 time_based=1 00:13:15.073 runtime=1 00:13:15.073 ioengine=libaio 00:13:15.073 direct=1 00:13:15.073 bs=4096 00:13:15.073 iodepth=128 00:13:15.073 norandommap=0 00:13:15.073 numjobs=1 00:13:15.073 00:13:15.073 verify_dump=1 00:13:15.073 verify_backlog=512 00:13:15.073 verify_state_save=0 00:13:15.073 do_verify=1 00:13:15.073 verify=crc32c-intel 00:13:15.073 [job0] 00:13:15.073 filename=/dev/nvme0n1 00:13:15.073 [job1] 00:13:15.073 filename=/dev/nvme0n2 00:13:15.073 [job2] 00:13:15.073 filename=/dev/nvme0n3 00:13:15.073 [job3] 00:13:15.073 filename=/dev/nvme0n4 00:13:15.073 Could not set queue depth (nvme0n1) 00:13:15.073 Could not set queue depth (nvme0n2) 00:13:15.073 Could not set queue depth (nvme0n3) 00:13:15.073 Could not set queue depth (nvme0n4) 00:13:15.335 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:15.335 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:15.335 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:15.335 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:15.335 fio-3.35 00:13:15.335 
Starting 4 threads 00:13:16.742 00:13:16.742 job0: (groupid=0, jobs=1): err= 0: pid=2785499: Mon Jul 22 19:17:35 2024 00:13:16.742 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:13:16.742 slat (nsec): min=929, max=14776k, avg=66619.82, stdev=492529.68 00:13:16.742 clat (usec): min=2818, max=38640, avg=9229.67, stdev=3413.31 00:13:16.742 lat (usec): min=2855, max=45369, avg=9296.29, stdev=3443.63 00:13:16.742 clat percentiles (usec): 00:13:16.742 | 1.00th=[ 3720], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6849], 00:13:16.742 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8586], 60.00th=[ 9110], 00:13:16.742 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[13304], 95.00th=[15664], 00:13:16.742 | 99.00th=[21627], 99.50th=[25560], 99.90th=[36963], 99.95th=[36963], 00:13:16.742 | 99.99th=[38536] 00:13:16.742 write: IOPS=7541, BW=29.5MiB/s (30.9MB/s)(29.6MiB/1005msec); 0 zone resets 00:13:16.742 slat (nsec): min=1595, max=9113.0k, avg=62944.90, stdev=436432.21 00:13:16.742 clat (usec): min=827, max=40770, avg=8016.97, stdev=4462.65 00:13:16.742 lat (usec): min=838, max=40772, avg=8079.92, stdev=4483.12 00:13:16.742 clat percentiles (usec): 00:13:16.742 | 1.00th=[ 3326], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5407], 00:13:16.742 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6980], 60.00th=[ 7439], 00:13:16.742 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[12125], 95.00th=[14353], 00:13:16.742 | 99.00th=[35390], 99.50th=[39060], 99.90th=[40633], 99.95th=[40633], 00:13:16.742 | 99.99th=[40633] 00:13:16.742 bw ( KiB/s): min=28672, max=30944, per=30.70%, avg=29808.00, stdev=1606.55, samples=2 00:13:16.742 iops : min= 7168, max= 7736, avg=7452.00, stdev=401.64, samples=2 00:13:16.742 lat (usec) : 1000=0.02% 00:13:16.742 lat (msec) : 2=0.04%, 4=2.05%, 10=76.32%, 20=19.90%, 50=1.67% 00:13:16.742 cpu : usr=5.68%, sys=7.67%, ctx=458, majf=0, minf=1 00:13:16.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:16.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:16.742 issued rwts: total=7168,7579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:16.742 job1: (groupid=0, jobs=1): err= 0: pid=2785511: Mon Jul 22 19:17:35 2024 00:13:16.742 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:13:16.742 slat (nsec): min=883, max=15670k, avg=113136.87, stdev=758603.84 00:13:16.742 clat (usec): min=3359, max=66543, avg=14393.88, stdev=7265.10 00:13:16.742 lat (usec): min=3366, max=66550, avg=14507.02, stdev=7334.54 00:13:16.742 clat percentiles (usec): 00:13:16.742 | 1.00th=[ 4621], 5.00th=[ 7177], 10.00th=[ 8160], 20.00th=[ 9110], 00:13:16.742 | 30.00th=[10683], 40.00th=[12125], 50.00th=[13829], 60.00th=[14746], 00:13:16.742 | 70.00th=[15533], 80.00th=[17433], 90.00th=[19268], 95.00th=[23462], 00:13:16.742 | 99.00th=[50594], 99.50th=[61080], 99.90th=[66323], 99.95th=[66323], 00:13:16.742 | 99.99th=[66323] 00:13:16.742 write: IOPS=4194, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec); 0 zone resets 00:13:16.742 slat (nsec): min=1517, max=7770.8k, avg=107691.41, stdev=545386.05 00:13:16.742 clat (usec): min=809, max=66547, avg=16220.72, stdev=14681.67 00:13:16.742 lat (usec): min=818, max=66563, avg=16328.41, stdev=14766.71 00:13:16.742 clat percentiles (usec): 00:13:16.742 | 1.00th=[ 2966], 5.00th=[ 4555], 10.00th=[ 5604], 20.00th=[ 7111], 00:13:16.742 | 30.00th=[ 7898], 40.00th=[ 9110], 
50.00th=[10814], 60.00th=[13829], 00:13:16.742 | 70.00th=[14877], 80.00th=[17171], 90.00th=[48497], 95.00th=[54264], 00:13:16.742 | 99.00th=[58459], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:13:16.742 | 99.99th=[66323] 00:13:16.742 bw ( KiB/s): min=16376, max=16392, per=16.87%, avg=16384.00, stdev=11.31, samples=2 00:13:16.742 iops : min= 4094, max= 4098, avg=4096.00, stdev= 2.83, samples=2 00:13:16.742 lat (usec) : 1000=0.04% 00:13:16.742 lat (msec) : 2=0.20%, 4=1.12%, 10=35.05%, 20=50.16%, 50=8.40% 00:13:16.742 lat (msec) : 100=5.03% 00:13:16.742 cpu : usr=2.89%, sys=4.38%, ctx=449, majf=0, minf=1 00:13:16.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:16.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:16.742 issued rwts: total=4096,4215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:16.742 job2: (groupid=0, jobs=1): err= 0: pid=2785529: Mon Jul 22 19:17:35 2024 00:13:16.742 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:13:16.742 slat (nsec): min=880, max=7195.5k, avg=83400.54, stdev=508201.34 00:13:16.743 clat (usec): min=4932, max=27314, avg=10236.09, stdev=3050.27 00:13:16.743 lat (usec): min=4936, max=27322, avg=10319.49, stdev=3093.62 00:13:16.743 clat percentiles (usec): 00:13:16.743 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 7635], 20.00th=[ 8291], 00:13:16.743 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9896], 00:13:16.743 | 70.00th=[10945], 80.00th=[12911], 90.00th=[14615], 95.00th=[15008], 00:13:16.743 | 99.00th=[21103], 99.50th=[25035], 99.90th=[26608], 99.95th=[27395], 00:13:16.743 | 99.99th=[27395] 00:13:16.743 write: IOPS=6495, BW=25.4MiB/s (26.6MB/s)(25.5MiB/1004msec); 0 zone resets 00:13:16.743 slat (nsec): min=1466, max=4514.3k, avg=70475.24, stdev=328727.74 00:13:16.743 clat (usec): min=1135, max=31045, avg=9890.78, stdev=3872.13 00:13:16.743 lat (usec): min=1146, max=31049, avg=9961.25, stdev=3900.05 00:13:16.743 clat percentiles (usec): 00:13:16.743 | 1.00th=[ 4883], 5.00th=[ 6456], 10.00th=[ 7504], 20.00th=[ 7963], 00:13:16.743 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8586], 00:13:16.743 | 70.00th=[ 9110], 80.00th=[11863], 90.00th=[14877], 95.00th=[17171], 00:13:16.743 | 99.00th=[25822], 99.50th=[28967], 99.90th=[31065], 99.95th=[31065], 00:13:16.743 | 99.99th=[31065] 00:13:16.743 bw ( KiB/s): min=22480, max=28672, per=26.34%, avg=25576.00, stdev=4378.41, samples=2 00:13:16.743 iops : min= 5620, max= 7168, avg=6394.00, stdev=1094.60, samples=2 00:13:16.743 lat (msec) : 2=0.09%, 4=0.09%, 10=68.05%, 20=29.43%, 50=2.33% 00:13:16.743 cpu : usr=3.39%, sys=5.38%, ctx=760, majf=0, minf=2 00:13:16.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:16.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:16.743 issued rwts: total=6144,6521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.743 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:16.743 job3: (groupid=0, jobs=1): err= 0: pid=2785535: Mon Jul 22 19:17:35 2024 00:13:16.743 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:13:16.743 slat (nsec): min=1003, max=9966.4k, avg=89328.73, stdev=671031.45 00:13:16.743 clat (usec): min=4754, max=22350, avg=11721.24, 
stdev=2836.17 00:13:16.743 lat (usec): min=4761, max=24012, avg=11810.57, stdev=2880.75 00:13:16.743 clat percentiles (usec): 00:13:16.743 | 1.00th=[ 6390], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 9634], 00:13:16.743 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11469], 00:13:16.743 | 70.00th=[12387], 80.00th=[14222], 90.00th=[16450], 95.00th=[17171], 00:13:16.743 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:13:16.743 | 99.99th=[22414] 00:13:16.743 write: IOPS=6064, BW=23.7MiB/s (24.8MB/s)(23.8MiB/1003msec); 0 zone resets 00:13:16.743 slat (nsec): min=1663, max=9488.5k, avg=74857.11, stdev=504347.72 00:13:16.743 clat (usec): min=1226, max=20593, avg=10032.63, stdev=2854.02 00:13:16.743 lat (usec): min=1236, max=20597, avg=10107.48, stdev=2868.32 00:13:16.743 clat percentiles (usec): 00:13:16.743 | 1.00th=[ 3589], 5.00th=[ 5342], 10.00th=[ 6521], 20.00th=[ 7242], 00:13:16.743 | 30.00th=[ 8291], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[11076], 00:13:16.743 | 70.00th=[11207], 80.00th=[11469], 90.00th=[14091], 95.00th=[15139], 00:13:16.743 | 99.00th=[16581], 99.50th=[18482], 99.90th=[20055], 99.95th=[20317], 00:13:16.743 | 99.99th=[20579] 00:13:16.743 bw ( KiB/s): min=23072, max=24576, per=24.53%, avg=23824.00, stdev=1063.49, samples=2 00:13:16.743 iops : min= 5768, max= 6144, avg=5956.00, stdev=265.87, samples=2 00:13:16.743 lat (msec) : 2=0.02%, 4=0.74%, 10=32.41%, 20=66.49%, 50=0.34% 00:13:16.743 cpu : usr=4.49%, sys=5.89%, ctx=503, majf=0, minf=1 00:13:16.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:16.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:16.743 issued rwts: total=5632,6083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.743 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:16.743 00:13:16.743 Run status group 0 (all jobs): 00:13:16.743 READ: bw=89.6MiB/s (93.9MB/s), 15.9MiB/s-27.9MiB/s (16.7MB/s-29.2MB/s), io=90.0MiB (94.4MB), run=1003-1005msec 00:13:16.743 WRITE: bw=94.8MiB/s (99.4MB/s), 16.4MiB/s-29.5MiB/s (17.2MB/s-30.9MB/s), io=95.3MiB (99.9MB), run=1003-1005msec 00:13:16.743 00:13:16.743 Disk stats (read/write): 00:13:16.743 nvme0n1: ios=6250/6656, merge=0/0, ticks=53904/43364, in_queue=97268, util=96.49% 00:13:16.743 nvme0n2: ios=3104/3247, merge=0/0, ticks=36253/48273, in_queue=84526, util=87.04% 00:13:16.743 nvme0n3: ios=5147/5247, merge=0/0, ticks=26657/25058, in_queue=51715, util=91.75% 00:13:16.743 nvme0n4: ios=4666/5063, merge=0/0, ticks=53048/49225, in_queue=102273, util=100.00% 00:13:16.743 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:16.743 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2785620 00:13:16.743 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:16.743 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:16.743 [global] 00:13:16.743 thread=1 00:13:16.743 invalidate=1 00:13:16.743 rw=read 00:13:16.743 time_based=1 00:13:16.743 runtime=10 00:13:16.743 ioengine=libaio 00:13:16.743 direct=1 00:13:16.743 bs=4096 00:13:16.743 iodepth=1 00:13:16.743 norandommap=1 00:13:16.743 numjobs=1 00:13:16.743 00:13:16.743 [job0] 00:13:16.743 filename=/dev/nvme0n1 00:13:16.743 [job1] 00:13:16.743 
filename=/dev/nvme0n2 00:13:16.743 [job2] 00:13:16.743 filename=/dev/nvme0n3 00:13:16.743 [job3] 00:13:16.743 filename=/dev/nvme0n4 00:13:16.743 Could not set queue depth (nvme0n1) 00:13:16.743 Could not set queue depth (nvme0n2) 00:13:16.743 Could not set queue depth (nvme0n3) 00:13:16.743 Could not set queue depth (nvme0n4) 00:13:17.092 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.092 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.092 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.092 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.092 fio-3.35 00:13:17.092 Starting 4 threads 00:13:19.663 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:19.663 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8032256, buflen=4096 00:13:19.663 fio: pid=2786007, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:19.663 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:19.925 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=10055680, buflen=4096 00:13:19.925 fio: pid=2785999, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:19.925 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:19.925 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:19.925 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=286720, buflen=4096 00:13:19.925 fio: pid=2785963, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:19.925 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:19.925 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:20.186 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=319488, buflen=4096 00:13:20.186 fio: pid=2785974, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:20.186 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:20.186 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:20.186 00:13:20.186 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2785963: Mon Jul 22 19:17:39 2024 00:13:20.186 read: IOPS=24, BW=95.0KiB/s (97.3kB/s)(280KiB/2947msec) 00:13:20.186 slat (usec): min=23, max=21468, avg=328.27, stdev=2544.71 00:13:20.186 clat (usec): min=1168, max=72708, avg=41460.27, stdev=7845.55 00:13:20.186 lat (usec): min=1197, max=72734, avg=41792.85, stdev=8255.94 00:13:20.186 clat percentiles (usec): 00:13:20.186 | 1.00th=[ 1172], 
5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:13:20.186 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:20.186 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:13:20.186 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:13:20.186 | 99.99th=[72877] 00:13:20.186 bw ( KiB/s): min= 96, max= 104, per=1.66%, avg=97.60, stdev= 3.58, samples=5 00:13:20.186 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:13:20.186 lat (msec) : 2=2.82%, 50=94.37%, 100=1.41% 00:13:20.186 cpu : usr=0.00%, sys=0.10%, ctx=73, majf=0, minf=1 00:13:20.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.186 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.186 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.186 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2785974: Mon Jul 22 19:17:39 2024 00:13:20.186 read: IOPS=25, BW=99.9KiB/s (102kB/s)(312KiB/3124msec) 00:13:20.186 slat (usec): min=24, max=7727, avg=146.08, stdev=883.35 00:13:20.186 clat (usec): min=876, max=43079, avg=39621.09, stdev=10201.66 00:13:20.186 lat (usec): min=918, max=49964, avg=39747.83, stdev=10264.32 00:13:20.186 clat percentiles (usec): 00:13:20.186 | 1.00th=[ 873], 5.00th=[ 996], 10.00th=[41157], 20.00th=[41681], 00:13:20.186 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:20.186 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:13:20.186 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:20.186 | 99.99th=[43254] 00:13:20.186 bw ( KiB/s): min= 89, max= 104, per=1.71%, avg=100.17, stdev= 6.34, samples=6 00:13:20.186 iops : min= 22, max= 26, avg=25.00, stdev= 1.67, samples=6 00:13:20.186 lat (usec) : 1000=5.06% 00:13:20.186 lat (msec) : 2=1.27%, 50=92.41% 00:13:20.186 cpu : usr=0.10%, sys=0.00%, ctx=83, majf=0, minf=1 00:13:20.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.186 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.186 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.186 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2785999: Mon Jul 22 19:17:39 2024 00:13:20.186 read: IOPS=898, BW=3593KiB/s (3679kB/s)(9820KiB/2733msec) 00:13:20.186 slat (usec): min=7, max=12642, avg=33.85, stdev=289.68 00:13:20.186 clat (usec): min=580, max=2350, avg=1069.00, stdev=81.45 00:13:20.186 lat (usec): min=617, max=13757, avg=1102.85, stdev=301.84 00:13:20.186 clat percentiles (usec): 00:13:20.186 | 1.00th=[ 857], 5.00th=[ 938], 10.00th=[ 971], 20.00th=[ 1012], 00:13:20.186 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:13:20.186 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:13:20.186 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1319], 00:13:20.186 | 99.99th=[ 2343] 00:13:20.186 bw ( KiB/s): min= 3608, max= 3656, per=62.29%, avg=3640.00, stdev=19.60, samples=5 00:13:20.186 iops : min= 902, max= 914, avg=910.00, stdev= 4.90, samples=5 
00:13:20.186 lat (usec) : 750=0.12%, 1000=17.26% 00:13:20.186 lat (msec) : 2=82.53%, 4=0.04% 00:13:20.186 cpu : usr=1.94%, sys=3.18%, ctx=2458, majf=0, minf=1 00:13:20.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.186 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.186 issued rwts: total=2456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.186 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2786007: Mon Jul 22 19:17:39 2024 00:13:20.186 read: IOPS=758, BW=3034KiB/s (3107kB/s)(7844KiB/2585msec) 00:13:20.186 slat (nsec): min=23360, max=98364, avg=25145.44, stdev=3811.33 00:13:20.186 clat (usec): min=697, max=1644, avg=1273.38, stdev=145.68 00:13:20.186 lat (usec): min=722, max=1668, avg=1298.52, stdev=145.73 00:13:20.186 clat percentiles (usec): 00:13:20.187 | 1.00th=[ 873], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1139], 00:13:20.187 | 30.00th=[ 1205], 40.00th=[ 1254], 50.00th=[ 1303], 60.00th=[ 1336], 00:13:20.187 | 70.00th=[ 1369], 80.00th=[ 1401], 90.00th=[ 1434], 95.00th=[ 1467], 00:13:20.187 | 99.00th=[ 1532], 99.50th=[ 1565], 99.90th=[ 1647], 99.95th=[ 1647], 00:13:20.187 | 99.99th=[ 1647] 00:13:20.187 bw ( KiB/s): min= 3040, max= 3104, per=52.50%, avg=3068.80, stdev=26.89, samples=5 00:13:20.187 iops : min= 760, max= 776, avg=767.20, stdev= 6.72, samples=5 00:13:20.187 lat (usec) : 750=0.05%, 1000=4.08% 00:13:20.187 lat (msec) : 2=95.82% 00:13:20.187 cpu : usr=0.93%, sys=2.13%, ctx=1963, majf=0, minf=2 00:13:20.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.187 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.187 issued rwts: total=1962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.187 00:13:20.187 Run status group 0 (all jobs): 00:13:20.187 READ: bw=5844KiB/s (5984kB/s), 95.0KiB/s-3593KiB/s (97.3kB/s-3679kB/s), io=17.8MiB (18.7MB), run=2585-3124msec 00:13:20.187 00:13:20.187 Disk stats (read/write): 00:13:20.187 nvme0n1: ios=68/0, merge=0/0, ticks=2789/0, in_queue=2789, util=93.96% 00:13:20.187 nvme0n2: ios=107/0, merge=0/0, ticks=3304/0, in_queue=3304, util=99.13% 00:13:20.187 nvme0n3: ios=2349/0, merge=0/0, ticks=2300/0, in_queue=2300, util=95.95% 00:13:20.187 nvme0n4: ios=1962/0, merge=0/0, ticks=2465/0, in_queue=2465, util=96.01% 00:13:20.448 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:20.448 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:20.708 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:20.708 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:20.968 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:20.968 
19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:20.968 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:20.969 19:17:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:21.230 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:21.230 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2785620 00:13:21.230 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:21.230 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.802 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.802 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:21.802 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:21.802 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:22.063 nvmf hotplug test: fio failed as expected 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:22.063 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.063 19:17:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.063 rmmod nvme_tcp 00:13:22.063 rmmod nvme_fabrics 00:13:22.063 rmmod nvme_keyring 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2782098 ']' 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2782098 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2782098 ']' 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2782098 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2782098 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2782098' 00:13:22.325 killing process with pid 2782098 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2782098 00:13:22.325 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2782098 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.268 19:17:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.184 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.184 00:13:25.184 real 0m30.139s 00:13:25.184 user 2m35.868s 00:13:25.184 sys 0m9.190s 00:13:25.184 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.184 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.184 ************************************ 00:13:25.184 END TEST nvmf_fio_target 00:13:25.184 ************************************ 00:13:25.184 19:17:44 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:25.184 19:17:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:25.184 19:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:25.184 19:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.184 19:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:25.184 ************************************ 00:13:25.184 START TEST nvmf_bdevio 00:13:25.184 ************************************ 00:13:25.184 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:25.446 * Looking for test storage... 00:13:25.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.446 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.447 19:17:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.447 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:32.041 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:32.042 19:17:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:32.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:32.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.042 19:17:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:32.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:32.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.042 19:17:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.042 19:17:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:13:32.303 00:13:32.303 --- 10.0.0.2 ping statistics --- 00:13:32.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.303 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:13:32.303 00:13:32.303 --- 10.0.0.1 ping statistics --- 00:13:32.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.303 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2791157 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2791157 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2791157 ']' 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.303 19:17:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:32.564 [2024-07-22 19:17:51.311090] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
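The nvmf_tcp_init trace above isolates the target-side port in its own network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, the host keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is verified in both directions with ping before the target application is launched under "ip netns exec". A minimal standalone sketch of the same sequence, using placeholder interface names eth_tgt/eth_ini instead of the cvl_0_* devices the harness detected:

    # Sketch only: eth_tgt/eth_ini stand in for the two NIC ports found by the harness.
    ip netns add nvmf_tgt_ns                           # namespace that will hold the target port
    ip link set eth_tgt netns nvmf_tgt_ns              # move the target port into it
    ip addr add 10.0.0.1/24 dev eth_ini                # initiator/host-side address
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec nvmf_tgt_ns ip link set eth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                 # host -> namespaced target port
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1       # namespace -> host port

Keeping the target port in a separate namespace is what lets the later "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" invocation listen on a dedicated interface, so initiator traffic in the test can traverse the physical link between the two ports rather than being short-circuited over loopback.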
00:13:32.564 [2024-07-22 19:17:51.311225] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.564 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.564 [2024-07-22 19:17:51.459605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.824 [2024-07-22 19:17:51.682986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.824 [2024-07-22 19:17:51.683058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.824 [2024-07-22 19:17:51.683073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.824 [2024-07-22 19:17:51.683084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.824 [2024-07-22 19:17:51.683097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.824 [2024-07-22 19:17:51.683318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:32.824 [2024-07-22 19:17:51.683517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:32.824 [2024-07-22 19:17:51.683657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.824 [2024-07-22 19:17:51.683683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:33.397 [2024-07-22 19:17:52.116325] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:33.397 Malloc0 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:33.397 [2024-07-22 19:17:52.221966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:33.397 { 00:13:33.397 "params": { 00:13:33.397 "name": "Nvme$subsystem", 00:13:33.397 "trtype": "$TEST_TRANSPORT", 00:13:33.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:33.397 "adrfam": "ipv4", 00:13:33.397 "trsvcid": "$NVMF_PORT", 00:13:33.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:33.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:33.397 "hdgst": ${hdgst:-false}, 00:13:33.397 "ddgst": ${ddgst:-false} 00:13:33.397 }, 00:13:33.397 "method": "bdev_nvme_attach_controller" 00:13:33.397 } 00:13:33.397 EOF 00:13:33.397 )") 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
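By this point the target has been configured entirely over JSON-RPC: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), a 64 MiB/512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying Malloc0 as a namespace, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then assembles the matching initiator-side bdev_nvme_attach_controller configuration that bdevio will consume. A rough equivalent of the target-side sequence issued with SPDK's scripts/rpc.py (rpc_cmd in the trace is assumed here to be a thin wrapper around that script talking to the target's RPC socket):

    # Sketch of the RPC calls traced above, issued against an already-running nvmf_tgt.
    RPC="/path/to/spdk/scripts/rpc.py"                # placeholder path to the RPC client
    $RPC nvmf_create_transport -t tcp -o -u 8192      # flags copied verbatim from the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420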
00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:33.397 19:17:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:33.397 "params": { 00:13:33.397 "name": "Nvme1", 00:13:33.397 "trtype": "tcp", 00:13:33.397 "traddr": "10.0.0.2", 00:13:33.397 "adrfam": "ipv4", 00:13:33.397 "trsvcid": "4420", 00:13:33.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:33.397 "hdgst": false, 00:13:33.397 "ddgst": false 00:13:33.397 }, 00:13:33.397 "method": "bdev_nvme_attach_controller" 00:13:33.397 }' 00:13:33.397 [2024-07-22 19:17:52.307439] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:33.397 [2024-07-22 19:17:52.307561] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791507 ] 00:13:33.659 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.659 [2024-07-22 19:17:52.435758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:33.919 [2024-07-22 19:17:52.618780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.919 [2024-07-22 19:17:52.618866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.919 [2024-07-22 19:17:52.618870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.179 I/O targets: 00:13:34.179 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:34.179 00:13:34.179 00:13:34.179 CUnit - A unit testing framework for C - Version 2.1-3 00:13:34.179 http://cunit.sourceforge.net/ 00:13:34.179 00:13:34.179 00:13:34.179 Suite: bdevio tests on: Nvme1n1 00:13:34.179 Test: blockdev write read block ...passed 00:13:34.180 Test: blockdev write zeroes read block ...passed 00:13:34.180 Test: blockdev write zeroes read no split ...passed 00:13:34.440 Test: blockdev write zeroes read split ...passed 00:13:34.440 Test: blockdev write zeroes read split partial ...passed 00:13:34.440 Test: blockdev reset ...[2024-07-22 19:17:53.257660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:34.440 [2024-07-22 19:17:53.257771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389080 (9): Bad file descriptor 00:13:34.701 [2024-07-22 19:17:53.404135] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:34.701 passed 00:13:34.701 Test: blockdev write read 8 blocks ...passed 00:13:34.701 Test: blockdev write read size > 128k ...passed 00:13:34.701 Test: blockdev write read invalid size ...passed 00:13:34.701 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:34.701 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:34.701 Test: blockdev write read max offset ...passed 00:13:34.701 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:34.701 Test: blockdev writev readv 8 blocks ...passed 00:13:34.701 Test: blockdev writev readv 30 x 1block ...passed 00:13:34.701 Test: blockdev writev readv block ...passed 00:13:34.701 Test: blockdev writev readv size > 128k ...passed 00:13:34.701 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:34.701 Test: blockdev comparev and writev ...[2024-07-22 19:17:53.633285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.633320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:34.701 [2024-07-22 19:17:53.633336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.633345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:34.701 [2024-07-22 19:17:53.633898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.633912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:34.701 [2024-07-22 19:17:53.633924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.633932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:34.701 [2024-07-22 19:17:53.634363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.634379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:34.701 [2024-07-22 19:17:53.634392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.634400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:34.701 [2024-07-22 19:17:53.634921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.634934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:34.701 [2024-07-22 19:17:53.634946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:34.701 [2024-07-22 19:17:53.634954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:34.961 passed 00:13:34.961 Test: blockdev nvme passthru rw ...passed 00:13:34.961 Test: blockdev nvme passthru vendor specific ...[2024-07-22 19:17:53.719095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:34.961 [2024-07-22 19:17:53.719117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:34.961 [2024-07-22 19:17:53.719520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:34.962 [2024-07-22 19:17:53.719535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:34.962 [2024-07-22 19:17:53.719961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:34.962 [2024-07-22 19:17:53.719972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:34.962 [2024-07-22 19:17:53.720374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:34.962 [2024-07-22 19:17:53.720386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:34.962 passed 00:13:34.962 Test: blockdev nvme admin passthru ...passed 00:13:34.962 Test: blockdev copy ...passed 00:13:34.962 00:13:34.962 Run Summary: Type Total Ran Passed Failed Inactive 00:13:34.962 suites 1 1 n/a 0 0 00:13:34.962 tests 23 23 23 0 0 00:13:34.962 asserts 152 152 152 0 n/a 00:13:34.962 00:13:34.962 Elapsed time = 1.710 seconds 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:35.532 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.803 rmmod nvme_tcp 00:13:35.803 rmmod nvme_fabrics 00:13:35.803 rmmod nvme_keyring 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
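The bdevio suite above (23/23 tests passed against Nvme1n1) was driven entirely by the JSON printed at common.sh@558 and streamed in over /dev/fd/62: a single bdev_nvme_attach_controller call pointing at the listener created earlier, which bdevio uses to attach the remote controller and expose its namespace as bdev Nvme1n1. Written out as a standalone file instead, and assuming gen_nvmf_target_json wraps the printed fragment in SPDK's usual subsystems/bdev structure, an equivalent invocation would look roughly like this ($SPDK_DIR is a placeholder for the SPDK tree used by this job):

    # Sketch: same configuration as the /dev/fd/62 stream, written to a file first.
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    $SPDK_DIR/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json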
00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2791157 ']' 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2791157 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2791157 ']' 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2791157 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2791157 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2791157' 00:13:35.803 killing process with pid 2791157 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2791157 00:13:35.803 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2791157 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.744 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.653 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.653 00:13:38.653 real 0m13.369s 00:13:38.653 user 0m20.241s 00:13:38.653 sys 0m6.035s 00:13:38.653 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.654 19:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:38.654 ************************************ 00:13:38.654 END TEST nvmf_bdevio 00:13:38.654 ************************************ 00:13:38.654 19:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:38.654 19:17:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:38.654 00:13:38.654 real 5m9.715s 00:13:38.654 user 12m14.400s 00:13:38.654 sys 1m44.898s 00:13:38.654 19:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.654 19:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:38.654 ************************************ 00:13:38.654 END TEST nvmf_target_core 00:13:38.654 
************************************ 00:13:38.654 19:17:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:38.654 19:17:57 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:38.654 19:17:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.654 19:17:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.654 19:17:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.654 ************************************ 00:13:38.654 START TEST nvmf_target_extra 00:13:38.654 ************************************ 00:13:38.654 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:38.916 * Looking for test storage... 00:13:38.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
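nvmf_target_extra begins by re-sourcing nvmf/common.sh, which (as traced above) pins the default NVMe/TCP port to 4420, generates a host NQN/ID pair with "nvme gen-hostnqn", and collects them into NVME_HOST for use by NVME_CONNECT ("nvme connect"). For the tests in this group that attach from the kernel initiator, those defaults combine into a connect command along the following lines; this is illustrative only (the subsystem name and address are taken from the target configured later in this log), not a command executed at this point in the run:

    # Illustrative nvme-cli usage built from the nvmf/common.sh defaults traced above.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme list                                      # the remote namespace appears as a local /dev/nvmeXnY
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # detach when finished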
00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.916 ************************************ 00:13:38.916 START TEST nvmf_example 00:13:38.916 ************************************ 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:38.916 * Looking for test storage... 00:13:38.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.916 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.917 19:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.917 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:47.055 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:47.055 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.055 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:47.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.056 19:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:47.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:47.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:13:47.056 00:13:47.056 --- 10.0.0.2 ping statistics --- 00:13:47.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.056 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:13:47.056 00:13:47.056 --- 10.0.0.1 ping statistics --- 00:13:47.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.056 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2796330 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2796330 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2796330 ']' 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:47.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.056 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:47.056 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.056 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.057 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.057 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:47.057 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.057 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:47.057 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.057 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:47.057 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:47.057 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.358 Initializing NVMe Controllers 00:13:59.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:59.358 Initialization complete. Launching workers. 00:13:59.358 ======================================================== 00:13:59.358 Latency(us) 00:13:59.358 Device Information : IOPS MiB/s Average min max 00:13:59.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16923.40 66.11 3782.47 961.36 16301.54 00:13:59.358 ======================================================== 00:13:59.358 Total : 16923.40 66.11 3782.47 961.36 16301.54 00:13:59.358 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.358 rmmod nvme_tcp 00:13:59.358 rmmod nvme_fabrics 00:13:59.358 rmmod nvme_keyring 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2796330 ']' 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2796330 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2796330 ']' 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2796330 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.358 19:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2796330 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2796330' 00:13:59.358 killing process with pid 2796330 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 2796330 00:13:59.358 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 2796330 00:13:59.358 nvmf threads initialize successfully 00:13:59.358 bdev subsystem init successfully 00:13:59.358 created a nvmf target service 00:13:59.358 create targets's poll groups done 00:13:59.358 all subsystems of target started 00:13:59.358 nvmf target is running 00:13:59.358 all subsystems of target stopped 00:13:59.358 destroy targets's poll groups done 00:13:59.358 destroyed the nvmf target service 00:13:59.358 bdev subsystem finish successfully 00:13:59.358 nvmf threads destroy successfully 00:13:59.358 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.358 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.358 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.358 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.358 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.358 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.358 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.359 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:00.301 00:14:00.301 real 0m21.481s 00:14:00.301 user 0m48.360s 00:14:00.301 sys 0m6.406s 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:00.301 ************************************ 00:14:00.301 END TEST nvmf_example 00:14:00.301 ************************************ 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.301 19:18:19 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.301 19:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 ************************************ 00:14:00.564 START TEST nvmf_filesystem 00:14:00.564 ************************************ 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:00.564 * Looking for test storage... 00:14:00.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:00.564 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:00.564 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:00.565 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:00.565 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:00.565 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:00.565 #define SPDK_CONFIG_H 00:14:00.565 #define SPDK_CONFIG_APPS 1 00:14:00.565 #define SPDK_CONFIG_ARCH native 00:14:00.565 #define SPDK_CONFIG_ASAN 1 00:14:00.565 #undef SPDK_CONFIG_AVAHI 00:14:00.565 #undef SPDK_CONFIG_CET 00:14:00.565 #define SPDK_CONFIG_COVERAGE 1 00:14:00.565 #define SPDK_CONFIG_CROSS_PREFIX 00:14:00.565 #undef SPDK_CONFIG_CRYPTO 00:14:00.565 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:00.565 #undef SPDK_CONFIG_CUSTOMOCF 00:14:00.565 #undef SPDK_CONFIG_DAOS 00:14:00.565 #define SPDK_CONFIG_DAOS_DIR 00:14:00.565 #define SPDK_CONFIG_DEBUG 1 00:14:00.565 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:00.565 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:00.565 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:00.565 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:00.565 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:00.565 #undef SPDK_CONFIG_DPDK_UADK 00:14:00.565 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:00.565 #define SPDK_CONFIG_EXAMPLES 1 00:14:00.565 #undef SPDK_CONFIG_FC 00:14:00.565 #define SPDK_CONFIG_FC_PATH 00:14:00.565 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:00.565 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:00.565 #undef SPDK_CONFIG_FUSE 00:14:00.565 #undef SPDK_CONFIG_FUZZER 00:14:00.565 #define SPDK_CONFIG_FUZZER_LIB 00:14:00.565 #undef SPDK_CONFIG_GOLANG 00:14:00.565 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:00.565 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:00.565 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:00.565 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:00.565 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:00.565 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:00.565 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:00.565 #define SPDK_CONFIG_IDXD 1 00:14:00.565 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:00.565 #undef SPDK_CONFIG_IPSEC_MB 00:14:00.565 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:00.565 #define SPDK_CONFIG_ISAL 1 00:14:00.565 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:00.565 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:00.565 #define SPDK_CONFIG_LIBDIR 00:14:00.565 #undef SPDK_CONFIG_LTO 00:14:00.565 #define SPDK_CONFIG_MAX_LCORES 128 00:14:00.566 #define SPDK_CONFIG_NVME_CUSE 1 00:14:00.566 #undef SPDK_CONFIG_OCF 00:14:00.566 #define SPDK_CONFIG_OCF_PATH 00:14:00.566 #define SPDK_CONFIG_OPENSSL_PATH 00:14:00.566 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:00.566 #define SPDK_CONFIG_PGO_DIR 00:14:00.566 #undef SPDK_CONFIG_PGO_USE 00:14:00.566 #define SPDK_CONFIG_PREFIX /usr/local 00:14:00.566 #undef SPDK_CONFIG_RAID5F 00:14:00.566 #undef SPDK_CONFIG_RBD 00:14:00.566 #define SPDK_CONFIG_RDMA 1 00:14:00.566 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:00.566 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:00.566 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:00.566 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:00.566 #define SPDK_CONFIG_SHARED 1 00:14:00.566 #undef SPDK_CONFIG_SMA 00:14:00.566 #define SPDK_CONFIG_TESTS 1 00:14:00.566 #undef SPDK_CONFIG_TSAN 00:14:00.566 #define SPDK_CONFIG_UBLK 1 00:14:00.566 #define SPDK_CONFIG_UBSAN 1 00:14:00.566 #undef SPDK_CONFIG_UNIT_TESTS 00:14:00.566 #undef SPDK_CONFIG_URING 00:14:00.566 #define SPDK_CONFIG_URING_PATH 00:14:00.566 #undef SPDK_CONFIG_URING_ZNS 00:14:00.566 #undef SPDK_CONFIG_USDT 00:14:00.566 #undef 
SPDK_CONFIG_VBDEV_COMPRESS 00:14:00.566 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:00.566 #undef SPDK_CONFIG_VFIO_USER 00:14:00.566 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:00.566 #define SPDK_CONFIG_VHOST 1 00:14:00.566 #define SPDK_CONFIG_VIRTIO 1 00:14:00.566 #undef SPDK_CONFIG_VTUNE 00:14:00.566 #define SPDK_CONFIG_VTUNE_DIR 00:14:00.566 #define SPDK_CONFIG_WERROR 1 00:14:00.566 #define SPDK_CONFIG_WPDK_DIR 00:14:00.566 #undef SPDK_CONFIG_XNVME 00:14:00.566 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:00.566 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:00.566 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:00.567 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:00.567 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:00.567 
19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:00.567 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:14:00.568 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2799595 ]] 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2799595 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Hd7Q04 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Hd7Q04/tests/target /tmp/spdk.Hd7Q04 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:14:00.568 19:18:19 
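Condensing the harness setup traced above into a runnable sketch (paths and values are the ones this run printed; $testdir stands for the nvmf target test directory, and the suppression-file handling is only approximated):
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
echo leak:libfuse3.so > /var/tmp/asan_suppression_file        # roughly what the trace does
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
# set_test_storage: pick a scratch directory, falling back to a mktemp path
storage_fallback=$(mktemp -udt spdk.XXXXXX)                   # /tmp/spdk.Hd7Q04 in this run
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"
df -T | grep -v Filesystem                                    # free space feeds the check further below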
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:14:00.568 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118519296000 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370976256 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10851680256 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64674230272 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685486080 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25850851328 00:14:00.569 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=23347200 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684691456 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=798720 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:14:00.569 * Looking for test storage... 
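The space check that follows this banner reduces to arithmetic on the df numbers parsed above; with this run's values (bytes throughout), it works out roughly to:
requested_size=$((2147483648 + 67108864))    # the 2 GiB request, apparently padded to 2214592512
target_space=118519296000                    # 'avail' for the overlay root mounted on /
(( target_space >= requested_size ))         # plenty of room, so / is accepted
new_size=$((10851680256 + requested_size))   # current use + request = 13066272768, as printed below
(( new_size * 100 / 129370976256 > 95 )) || echo "well under 95% of the filesystem, use it"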
00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118519296000 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:14:00.569 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13066272768 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.831 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.832 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.420 
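For reference, the defaults nvmf/common.sh contributes in this trace, grouped here as a condensed sketch rather than the script itself (the hostid matches the uuid portion of the generated hostnqn):
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be here
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # -e 0xFFFF: enable every tracepoint group
MALLOC_BDEV_SIZE=512 MALLOC_BLOCK_SIZE=512     # a 512 MiB malloc bdev with 512-byte blocks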
19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:07.420 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:07.420 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:07.420 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:07.420 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.420 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.681 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.681 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.681 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.681 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.681 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.681 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:14:07.682 00:14:07.682 --- 10.0.0.2 ping statistics --- 00:14:07.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.682 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:14:07.682 00:14:07.682 --- 10.0.0.1 ping statistics --- 00:14:07.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.682 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.682 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:07.943 ************************************ 00:14:07.943 START TEST nvmf_filesystem_no_in_capsule 00:14:07.943 ************************************ 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2803339 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2803339 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2803339 ']' 00:14:07.943 
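The two E810 ports found above (0000:4b:00.0/1, bound to the ice driver) end up on opposite sides of the TCP link; the bring-up traced above amounts to the following, with interface names and addresses exactly as printed in this log:
ls /sys/bus/pci/devices/0000:4b:00.0/net/            # -> cvl_0_0, used as the target interface
ls /sys/bus/pci/devices/0000:4b:00.1/net/            # -> cvl_0_1, used as the initiator interface
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> namespaced target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back; both pings answered above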
19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.943 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:07.943 [2024-07-22 19:18:26.743670] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:07.943 [2024-07-22 19:18:26.743790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.943 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.943 [2024-07-22 19:18:26.878078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.204 [2024-07-22 19:18:27.067339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.204 [2024-07-22 19:18:27.067381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.204 [2024-07-22 19:18:27.067394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.204 [2024-07-22 19:18:27.067403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.204 [2024-07-22 19:18:27.067414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
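Because the target runs with -e 0xFFFF, the startup notices above also spell out how to inspect it while the test is running (shm id 0 matches this instance):
spdk_trace -s nvmf -i 0          # snapshot of runtime tracepoints, per the notice above
# or copy /dev/shm/nvmf_trace.0 for offline analysis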
00:14:08.204 [2024-07-22 19:18:27.067588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.204 [2024-07-22 19:18:27.067706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.204 [2024-07-22 19:18:27.067819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.204 [2024-07-22 19:18:27.067845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.775 [2024-07-22 19:18:27.535876] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.775 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.035 Malloc1 00:14:09.035 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.035 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:09.035 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.035 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.035 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.035 19:18:27 
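Pulling the target startup and provisioning out of the trace around this point (the add_ns and add_listener calls appear just below; rpc_cmd talks to the RPC socket at /var/tmp/spdk.sock):
# start nvmf_tgt inside the namespace on cores 0-3; PID 2803339 in this run
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs, then:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c comes from in_capsule=0, per the test name
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420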
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:09.035 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.036 [2024-07-22 19:18:27.938338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:09.036 { 00:14:09.036 "name": "Malloc1", 00:14:09.036 "aliases": [ 00:14:09.036 "8dffe84f-e06a-4a40-bee3-7e9262d5bea5" 00:14:09.036 ], 00:14:09.036 "product_name": "Malloc disk", 00:14:09.036 "block_size": 512, 00:14:09.036 "num_blocks": 1048576, 00:14:09.036 "uuid": "8dffe84f-e06a-4a40-bee3-7e9262d5bea5", 00:14:09.036 "assigned_rate_limits": { 00:14:09.036 "rw_ios_per_sec": 0, 00:14:09.036 "rw_mbytes_per_sec": 0, 00:14:09.036 "r_mbytes_per_sec": 0, 00:14:09.036 "w_mbytes_per_sec": 0 00:14:09.036 }, 00:14:09.036 "claimed": true, 00:14:09.036 "claim_type": "exclusive_write", 00:14:09.036 "zoned": false, 00:14:09.036 "supported_io_types": { 00:14:09.036 "read": 
true, 00:14:09.036 "write": true, 00:14:09.036 "unmap": true, 00:14:09.036 "flush": true, 00:14:09.036 "reset": true, 00:14:09.036 "nvme_admin": false, 00:14:09.036 "nvme_io": false, 00:14:09.036 "nvme_io_md": false, 00:14:09.036 "write_zeroes": true, 00:14:09.036 "zcopy": true, 00:14:09.036 "get_zone_info": false, 00:14:09.036 "zone_management": false, 00:14:09.036 "zone_append": false, 00:14:09.036 "compare": false, 00:14:09.036 "compare_and_write": false, 00:14:09.036 "abort": true, 00:14:09.036 "seek_hole": false, 00:14:09.036 "seek_data": false, 00:14:09.036 "copy": true, 00:14:09.036 "nvme_iov_md": false 00:14:09.036 }, 00:14:09.036 "memory_domains": [ 00:14:09.036 { 00:14:09.036 "dma_device_id": "system", 00:14:09.036 "dma_device_type": 1 00:14:09.036 }, 00:14:09.036 { 00:14:09.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.036 "dma_device_type": 2 00:14:09.036 } 00:14:09.036 ], 00:14:09.036 "driver_specific": {} 00:14:09.036 } 00:14:09.036 ]' 00:14:09.036 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:09.296 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:09.296 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:09.296 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:09.296 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:09.296 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:09.296 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:09.296 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.680 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.680 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:10.680 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.680 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:10.680 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:13.225 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:13.796 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:14.737 ************************************ 00:14:14.737 START TEST filesystem_ext4 00:14:14.737 ************************************ 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
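Before this ext4 pass starts, the host side has already connected and carved out a partition; condensed from the trace above (the polling loop is only a rough stand-in for the harness's waitforserial helper):
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 here
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1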
00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:14:14.737 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:14.737 mke2fs 1.46.5 (30-Dec-2021) 00:14:14.737 Discarding device blocks: 0/522240 done 00:14:14.737 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:14.737 Filesystem UUID: 9e5c3a6a-847d-4b20-a650-a175d8b7a189 00:14:14.737 Superblock backups stored on blocks: 00:14:14.737 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:14.737 00:14:14.737 Allocating group tables: 0/64 done 00:14:14.737 Writing inode tables: 0/64 done 00:14:16.648 Creating journal (8192 blocks): done 00:14:17.479 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:14:17.479 00:14:17.479 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:14:17.479 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:18.050 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:18.311 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:18.311 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:18.311 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:18.312 
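The body of each filesystem sub-test is the same small exercise, traced here for ext4 (the btrfs pass below differs only in using mkfs.btrfs -f), followed by the liveness checks that appear just after this point:
mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 2803339                              # the nvmf target must still be running after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1        # and the remote namespace ...
lsblk -l -o NAME | grep -q -w nvme0n1p1      # ... plus its partition must still be visible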
19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2803339 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:18.312 00:14:18.312 real 0m3.529s 00:14:18.312 user 0m0.037s 00:14:18.312 sys 0m0.063s 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:18.312 ************************************ 00:14:18.312 END TEST filesystem_ext4 00:14:18.312 ************************************ 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.312 ************************************ 00:14:18.312 START TEST filesystem_btrfs 00:14:18.312 ************************************ 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:14:18.312 19:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:14:18.312 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:18.573 btrfs-progs v6.6.2 00:14:18.573 See https://btrfs.readthedocs.io for more information. 00:14:18.573 00:14:18.573 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:18.573 NOTE: several default settings have changed in version 5.15, please make sure 00:14:18.573 this does not affect your deployments: 00:14:18.573 - DUP for metadata (-m dup) 00:14:18.573 - enabled no-holes (-O no-holes) 00:14:18.573 - enabled free-space-tree (-R free-space-tree) 00:14:18.573 00:14:18.573 Label: (null) 00:14:18.573 UUID: 953a4b09-7547-4609-be70-044f9f42750f 00:14:18.573 Node size: 16384 00:14:18.573 Sector size: 4096 00:14:18.573 Filesystem size: 510.00MiB 00:14:18.573 Block group profiles: 00:14:18.573 Data: single 8.00MiB 00:14:18.573 Metadata: DUP 32.00MiB 00:14:18.573 System: DUP 8.00MiB 00:14:18.573 SSD detected: yes 00:14:18.573 Zoned device: no 00:14:18.573 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:18.573 Runtime features: free-space-tree 00:14:18.573 Checksum: crc32c 00:14:18.573 Number of devices: 1 00:14:18.573 Devices: 00:14:18.573 ID SIZE PATH 00:14:18.573 1 510.00MiB /dev/nvme0n1p1 00:14:18.573 00:14:18.573 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:18.573 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2803339 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:19.145 00:14:19.145 real 0m0.799s 00:14:19.145 user 0m0.025s 00:14:19.145 sys 0m0.133s 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.145 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:19.145 ************************************ 00:14:19.145 END TEST filesystem_btrfs 00:14:19.145 ************************************ 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:19.145 ************************************ 00:14:19.145 START TEST filesystem_xfs 00:14:19.145 ************************************ 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:19.145 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:19.145 19:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:19.406 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:19.406 = sectsz=512 attr=2, projid32bit=1 00:14:19.406 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:19.406 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:19.406 data = bsize=4096 blocks=130560, imaxpct=25 00:14:19.406 = sunit=0 swidth=0 blks 00:14:19.406 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:19.406 log =internal log bsize=4096 blocks=16384, version=2 00:14:19.406 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:19.406 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:20.348 Discarding blocks...Done. 00:14:20.348 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:20.348 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2803339 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:22.963 00:14:22.963 real 0m3.609s 00:14:22.963 user 0m0.023s 00:14:22.963 sys 0m0.077s 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:22.963 ************************************ 00:14:22.963 END TEST filesystem_xfs 00:14:22.963 ************************************ 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:22.963 19:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:22.963 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2803339 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2803339 ']' 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2803339 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2803339 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:23.535 19:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2803339' 00:14:23.535 killing process with pid 2803339 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2803339 00:14:23.535 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2803339 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:25.455 00:14:25.455 real 0m17.547s 00:14:25.455 user 1m7.523s 00:14:25.455 sys 0m1.401s 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.455 ************************************ 00:14:25.455 END TEST nvmf_filesystem_no_in_capsule 00:14:25.455 ************************************ 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.455 ************************************ 00:14:25.455 START TEST nvmf_filesystem_in_capsule 00:14:25.455 ************************************ 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2807049 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2807049 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.455 19:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2807049 ']' 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.455 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.455 [2024-07-22 19:18:44.374554] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:25.455 [2024-07-22 19:18:44.374687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.716 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.716 [2024-07-22 19:18:44.506536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.977 [2024-07-22 19:18:44.690859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.977 [2024-07-22 19:18:44.690904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.977 [2024-07-22 19:18:44.690917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.977 [2024-07-22 19:18:44.690926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.977 [2024-07-22 19:18:44.690937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
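The ext4, btrfs and xfs subtests traced above all exercise the same helper loop from target/filesystem.sh: format the partition exported over NVMe/TCP, mount it, create and delete a file, unmount, then confirm the target process and the block devices survived. A minimal bash sketch of that loop follows; the device path, mount point and target PID mirror the values recorded in this run and are illustrative only (the real make_filesystem helper also retries on failure, which is omitted here).

  fstype=ext4                       # the suite repeats this for btrfs and xfs
  dev=/dev/nvme0n1p1                # partition on the namespace exported by nvmf_tgt
  mnt=/mnt/device
  nvmfpid=2803339                   # PID saved by nvmfappstart in this run (illustrative)

  # make_filesystem: ext4 wants -F to force, btrfs/xfs use -f
  case "$fstype" in ext4) force=-F ;; *) force=-f ;; esac
  mkfs."$fstype" "$force" "$dev"

  mount "$dev" "$mnt"               # quick I/O smoke test on the fresh filesystem
  touch "$mnt/aaa"
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"

  kill -0 "$nvmfpid"                            # target must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1         # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1       # partition still visible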
00:14:25.977 [2024-07-22 19:18:44.691123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.977 [2024-07-22 19:18:44.691216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.977 [2024-07-22 19:18:44.691329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.977 [2024-07-22 19:18:44.691356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 [2024-07-22 19:18:45.155853] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.238 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.808 Malloc1 00:14:26.808 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.808 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:26.808 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.808 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.808 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.808 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 [2024-07-22 19:18:45.559834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:26.809 { 00:14:26.809 "name": "Malloc1", 00:14:26.809 "aliases": [ 00:14:26.809 "fd6f9794-e990-49ca-9a48-ba8aa356ce89" 00:14:26.809 ], 00:14:26.809 "product_name": "Malloc disk", 00:14:26.809 "block_size": 512, 00:14:26.809 "num_blocks": 1048576, 00:14:26.809 "uuid": "fd6f9794-e990-49ca-9a48-ba8aa356ce89", 00:14:26.809 "assigned_rate_limits": { 00:14:26.809 "rw_ios_per_sec": 0, 00:14:26.809 "rw_mbytes_per_sec": 0, 00:14:26.809 "r_mbytes_per_sec": 0, 00:14:26.809 "w_mbytes_per_sec": 0 00:14:26.809 }, 00:14:26.809 "claimed": true, 00:14:26.809 "claim_type": "exclusive_write", 00:14:26.809 "zoned": false, 00:14:26.809 "supported_io_types": { 00:14:26.809 "read": true, 00:14:26.809 "write": true, 00:14:26.809 "unmap": true, 00:14:26.809 "flush": true, 00:14:26.809 "reset": true, 00:14:26.809 "nvme_admin": false, 
00:14:26.809 "nvme_io": false, 00:14:26.809 "nvme_io_md": false, 00:14:26.809 "write_zeroes": true, 00:14:26.809 "zcopy": true, 00:14:26.809 "get_zone_info": false, 00:14:26.809 "zone_management": false, 00:14:26.809 "zone_append": false, 00:14:26.809 "compare": false, 00:14:26.809 "compare_and_write": false, 00:14:26.809 "abort": true, 00:14:26.809 "seek_hole": false, 00:14:26.809 "seek_data": false, 00:14:26.809 "copy": true, 00:14:26.809 "nvme_iov_md": false 00:14:26.809 }, 00:14:26.809 "memory_domains": [ 00:14:26.809 { 00:14:26.809 "dma_device_id": "system", 00:14:26.809 "dma_device_type": 1 00:14:26.809 }, 00:14:26.809 { 00:14:26.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.809 "dma_device_type": 2 00:14:26.809 } 00:14:26.809 ], 00:14:26.809 "driver_specific": {} 00:14:26.809 } 00:14:26.809 ]' 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:26.809 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.720 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.720 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:28.720 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.720 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:28.720 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:30.631 19:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:30.631 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:30.891 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:31.832 ************************************ 00:14:31.832 START TEST filesystem_in_capsule_ext4 00:14:31.832 ************************************ 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:31.832 19:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:14:31.832 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:31.832 mke2fs 1.46.5 (30-Dec-2021) 00:14:32.092 Discarding device blocks: 0/522240 done 00:14:32.092 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:32.092 Filesystem UUID: 3cad42a2-a116-4315-b19d-f129a5316088 00:14:32.092 Superblock backups stored on blocks: 00:14:32.092 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:32.092 00:14:32.092 Allocating group tables: 0/64 done 00:14:32.092 Writing inode tables: 0/64 done 00:14:34.651 Creating journal (8192 blocks): done 00:14:34.651 Writing superblocks and filesystem accounting information: 0/64 done 00:14:34.651 00:14:34.651 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:14:34.651 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:34.912 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:34.912 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:34.912 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:34.912 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:34.912 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:34.912 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:35.173 19:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2807049 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:35.173 00:14:35.173 real 0m3.131s 00:14:35.173 user 0m0.031s 00:14:35.173 sys 0m0.066s 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:35.173 ************************************ 00:14:35.173 END TEST filesystem_in_capsule_ext4 00:14:35.173 ************************************ 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:35.173 ************************************ 00:14:35.173 START TEST filesystem_in_capsule_btrfs 00:14:35.173 ************************************ 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:14:35.173 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:35.434 btrfs-progs v6.6.2 00:14:35.434 See https://btrfs.readthedocs.io for more information. 00:14:35.434 00:14:35.434 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:35.434 NOTE: several default settings have changed in version 5.15, please make sure 00:14:35.434 this does not affect your deployments: 00:14:35.434 - DUP for metadata (-m dup) 00:14:35.434 - enabled no-holes (-O no-holes) 00:14:35.434 - enabled free-space-tree (-R free-space-tree) 00:14:35.434 00:14:35.434 Label: (null) 00:14:35.434 UUID: c5aacce0-4a24-4250-a6a6-4e57d631d262 00:14:35.434 Node size: 16384 00:14:35.434 Sector size: 4096 00:14:35.434 Filesystem size: 510.00MiB 00:14:35.434 Block group profiles: 00:14:35.434 Data: single 8.00MiB 00:14:35.434 Metadata: DUP 32.00MiB 00:14:35.434 System: DUP 8.00MiB 00:14:35.434 SSD detected: yes 00:14:35.434 Zoned device: no 00:14:35.434 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:35.434 Runtime features: free-space-tree 00:14:35.434 Checksum: crc32c 00:14:35.434 Number of devices: 1 00:14:35.434 Devices: 00:14:35.434 ID SIZE PATH 00:14:35.434 1 510.00MiB /dev/nvme0n1p1 00:14:35.434 00:14:35.434 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:35.434 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2807049 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:36.006 00:14:36.006 real 0m0.798s 00:14:36.006 user 0m0.034s 00:14:36.006 sys 0m0.128s 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:36.006 ************************************ 00:14:36.006 END TEST filesystem_in_capsule_btrfs 00:14:36.006 ************************************ 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.006 ************************************ 00:14:36.006 START TEST filesystem_in_capsule_xfs 00:14:36.006 ************************************ 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:14:36.006 19:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:36.006 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:36.006 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:36.006 = sectsz=512 attr=2, projid32bit=1 00:14:36.006 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:36.006 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:36.006 data = bsize=4096 blocks=130560, imaxpct=25 00:14:36.006 = sunit=0 swidth=0 blks 00:14:36.006 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:36.006 log =internal log bsize=4096 blocks=16384, version=2 00:14:36.006 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:36.006 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:36.948 Discarding blocks...Done. 00:14:36.948 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:36.948 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2807049 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:38.861 00:14:38.861 real 0m2.863s 00:14:38.861 user 0m0.029s 00:14:38.861 sys 0m0.074s 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.861 
19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:38.861 ************************************ 00:14:38.861 END TEST filesystem_in_capsule_xfs 00:14:38.861 ************************************ 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:38.861 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:39.122 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:39.383 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2807049 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2807049 ']' 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2807049 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
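The in-capsule variant only changes how the transport is created (-c 4096 instead of the default); the provisioning steps traced above are otherwise the same as in the no-in-capsule pass. rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py, run here inside the cvl_0_0_ns_spdk network namespace, so a stand-alone sketch of the same sequence looks roughly like the following, with the IP address, NQN and serial taken from the trace and purely illustrative.

  # target side: 4 reactors, TCP transport accepting up to 4096 bytes of in-capsule data
  ./build/bin/nvmf_tgt -m 0xF &
  # (waitforlisten in the real script polls the RPC socket before issuing commands)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB ramdisk, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # sanity check used by get_bdev_size above: block_size * num_blocks must equal 512 MiB
  ./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size, .[] .num_blocks'

  # host side: connect over TCP, locate the namespace by its serial, carve one partition
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe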
00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2807049 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2807049' 00:14:39.644 killing process with pid 2807049 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2807049 00:14:39.644 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2807049 00:14:41.558 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:41.558 00:14:41.558 real 0m16.073s 00:14:41.558 user 1m1.662s 00:14:41.558 sys 0m1.379s 00:14:41.558 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.558 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.558 ************************************ 00:14:41.558 END TEST nvmf_filesystem_in_capsule 00:14:41.558 ************************************ 00:14:41.558 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.559 rmmod nvme_tcp 00:14:41.559 rmmod nvme_fabrics 00:14:41.559 rmmod nvme_keyring 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.559 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:44.105 00:14:44.105 real 0m43.252s 00:14:44.105 user 2m11.384s 00:14:44.105 sys 0m8.146s 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.105 ************************************ 00:14:44.105 END TEST nvmf_filesystem 00:14:44.105 ************************************ 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.105 ************************************ 00:14:44.105 START TEST nvmf_target_discovery 00:14:44.105 ************************************ 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:44.105 * Looking for test storage... 
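Everything between END TEST filesystem_in_capsule_xfs and END TEST nvmf_filesystem above is teardown. Expanded by hand from the traced helpers (their exact bodies live in test/nvmf/common.sh and autotest_common.sh, so treat this as an approximation rather than the scripts themselves):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1             # drop the test partition under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # initiator side
  # waitforserial_disconnect: poll lsblk -o NAME,SERIAL until SPDKISFASTANDAWESOME disappears
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # target side, over the RPC socket
  kill 2807049 && wait 2807049                               # killprocess: stop the nvmf_tgt reactor_0 process
  modprobe -v -r nvme-tcp                                    # nvmftestfini / nvmfcleanup
  modprobe -v -r nvme-fabrics
  # _remove_spdk_ns tears down the cvl_0_0_ns_spdk namespace, then the leftover address is flushed:
  ip -4 addr flush cvl_0_1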
00:14:44.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.105 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:14:44.106 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:14:50.696 19:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:50.696 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:50.696 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:50.696 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.696 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
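gather_supported_nvmf_pci_devs above whitelists Intel E810/X722 and a set of Mellanox device IDs, then resolves each surviving PCI address to its kernel interface through the /sys/bus/pci/devices/$pci/net/ glob seen in the trace. The same lookup can be reproduced by hand for the two E810 ports found in this run:

  ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:4b:00.1/net/    # -> cvl_0_1
  lspci -k -s 4b:00.0                          # 8086:159b, kernel driver in use: ice

With two usable ports the script picks cvl_0_0 as the target interface and cvl_0_1 as the initiator interface, which is what the next block wires up.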
00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:50.697 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.697 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:14:51.056 00:14:51.056 --- 10.0.0.2 ping statistics --- 00:14:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.056 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:14:51.056 00:14:51.056 --- 10.0.0.1 ping statistics --- 00:14:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.056 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.056 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2814355 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2814355 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2814355 ']' 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
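nvmf_tcp_init above splits the two ports into a target side and an initiator side: cvl_0_0 moves into a fresh network namespace with 10.0.0.2, cvl_0_1 stays in the host namespace with 10.0.0.1, and the target application is then launched inside the namespace. Condensed from the trace, with paths shortened and the address flushes and waitforlisten's polling of /var/tmp/spdk.sock left out:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # pid 2814355 in this run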
00:14:51.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.057 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.057 [2024-07-22 19:19:09.930150] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:51.057 [2024-07-22 19:19:09.930257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.318 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.318 [2024-07-22 19:19:10.051432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.318 [2024-07-22 19:19:10.236106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.318 [2024-07-22 19:19:10.236151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.318 [2024-07-22 19:19:10.236164] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.318 [2024-07-22 19:19:10.236174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.318 [2024-07-22 19:19:10.236184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.318 [2024-07-22 19:19:10.236355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.318 [2024-07-22 19:19:10.236437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.318 [2024-07-22 19:19:10.236552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.318 [2024-07-22 19:19:10.236578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 [2024-07-22 19:19:10.723973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:51.889 19:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 Null1 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 [2024-07-22 19:19:10.784347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 Null2 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.889 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.890 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.890 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:51.890 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:51.890 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.890 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 Null3 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:52.150 19:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.150 Null4 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.150 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.151 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:52.151 00:14:52.151 Discovery Log Number of Records 6, Generation counter 6 00:14:52.151 
=====Discovery Log Entry 0====== 00:14:52.151 trtype: tcp 00:14:52.151 adrfam: ipv4 00:14:52.151 subtype: current discovery subsystem 00:14:52.151 treq: not required 00:14:52.151 portid: 0 00:14:52.151 trsvcid: 4420 00:14:52.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:52.151 traddr: 10.0.0.2 00:14:52.151 eflags: explicit discovery connections, duplicate discovery information 00:14:52.151 sectype: none 00:14:52.151 =====Discovery Log Entry 1====== 00:14:52.151 trtype: tcp 00:14:52.151 adrfam: ipv4 00:14:52.151 subtype: nvme subsystem 00:14:52.151 treq: not required 00:14:52.151 portid: 0 00:14:52.151 trsvcid: 4420 00:14:52.151 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:52.151 traddr: 10.0.0.2 00:14:52.151 eflags: none 00:14:52.151 sectype: none 00:14:52.151 =====Discovery Log Entry 2====== 00:14:52.151 trtype: tcp 00:14:52.151 adrfam: ipv4 00:14:52.151 subtype: nvme subsystem 00:14:52.151 treq: not required 00:14:52.151 portid: 0 00:14:52.151 trsvcid: 4420 00:14:52.151 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:52.151 traddr: 10.0.0.2 00:14:52.151 eflags: none 00:14:52.151 sectype: none 00:14:52.151 =====Discovery Log Entry 3====== 00:14:52.151 trtype: tcp 00:14:52.151 adrfam: ipv4 00:14:52.151 subtype: nvme subsystem 00:14:52.151 treq: not required 00:14:52.151 portid: 0 00:14:52.151 trsvcid: 4420 00:14:52.151 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:52.151 traddr: 10.0.0.2 00:14:52.151 eflags: none 00:14:52.151 sectype: none 00:14:52.151 =====Discovery Log Entry 4====== 00:14:52.151 trtype: tcp 00:14:52.151 adrfam: ipv4 00:14:52.151 subtype: nvme subsystem 00:14:52.151 treq: not required 00:14:52.151 portid: 0 00:14:52.151 trsvcid: 4420 00:14:52.151 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:52.151 traddr: 10.0.0.2 00:14:52.151 eflags: none 00:14:52.151 sectype: none 00:14:52.151 =====Discovery Log Entry 5====== 00:14:52.151 trtype: tcp 00:14:52.151 adrfam: ipv4 00:14:52.151 subtype: discovery subsystem referral 00:14:52.151 treq: not required 00:14:52.151 portid: 0 00:14:52.151 trsvcid: 4430 00:14:52.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:52.151 traddr: 10.0.0.2 00:14:52.151 eflags: none 00:14:52.151 sectype: none 00:14:52.151 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:52.151 Perform nvmf subsystem discovery via RPC 00:14:52.151 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:52.151 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.151 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.151 [ 00:14:52.151 { 00:14:52.151 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:52.151 "subtype": "Discovery", 00:14:52.151 "listen_addresses": [ 00:14:52.151 { 00:14:52.151 "trtype": "TCP", 00:14:52.151 "adrfam": "IPv4", 00:14:52.151 "traddr": "10.0.0.2", 00:14:52.151 "trsvcid": "4420" 00:14:52.151 } 00:14:52.151 ], 00:14:52.151 "allow_any_host": true, 00:14:52.151 "hosts": [] 00:14:52.151 }, 00:14:52.151 { 00:14:52.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.151 "subtype": "NVMe", 00:14:52.151 "listen_addresses": [ 00:14:52.151 { 00:14:52.151 "trtype": "TCP", 00:14:52.151 "adrfam": "IPv4", 00:14:52.151 "traddr": "10.0.0.2", 00:14:52.151 "trsvcid": "4420" 00:14:52.151 } 00:14:52.151 ], 00:14:52.151 "allow_any_host": true, 00:14:52.151 "hosts": [], 00:14:52.151 
"serial_number": "SPDK00000000000001", 00:14:52.151 "model_number": "SPDK bdev Controller", 00:14:52.151 "max_namespaces": 32, 00:14:52.151 "min_cntlid": 1, 00:14:52.151 "max_cntlid": 65519, 00:14:52.151 "namespaces": [ 00:14:52.151 { 00:14:52.151 "nsid": 1, 00:14:52.151 "bdev_name": "Null1", 00:14:52.151 "name": "Null1", 00:14:52.151 "nguid": "A8FFF644045849039B56F0F844035FBB", 00:14:52.151 "uuid": "a8fff644-0458-4903-9b56-f0f844035fbb" 00:14:52.151 } 00:14:52.151 ] 00:14:52.151 }, 00:14:52.151 { 00:14:52.151 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:52.151 "subtype": "NVMe", 00:14:52.151 "listen_addresses": [ 00:14:52.151 { 00:14:52.151 "trtype": "TCP", 00:14:52.151 "adrfam": "IPv4", 00:14:52.151 "traddr": "10.0.0.2", 00:14:52.151 "trsvcid": "4420" 00:14:52.151 } 00:14:52.151 ], 00:14:52.151 "allow_any_host": true, 00:14:52.151 "hosts": [], 00:14:52.151 "serial_number": "SPDK00000000000002", 00:14:52.151 "model_number": "SPDK bdev Controller", 00:14:52.151 "max_namespaces": 32, 00:14:52.151 "min_cntlid": 1, 00:14:52.151 "max_cntlid": 65519, 00:14:52.151 "namespaces": [ 00:14:52.151 { 00:14:52.151 "nsid": 1, 00:14:52.151 "bdev_name": "Null2", 00:14:52.151 "name": "Null2", 00:14:52.151 "nguid": "A09DE964391E479B92AC12A9CFBCF392", 00:14:52.151 "uuid": "a09de964-391e-479b-92ac-12a9cfbcf392" 00:14:52.151 } 00:14:52.151 ] 00:14:52.151 }, 00:14:52.151 { 00:14:52.151 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:52.151 "subtype": "NVMe", 00:14:52.151 "listen_addresses": [ 00:14:52.151 { 00:14:52.151 "trtype": "TCP", 00:14:52.151 "adrfam": "IPv4", 00:14:52.151 "traddr": "10.0.0.2", 00:14:52.151 "trsvcid": "4420" 00:14:52.151 } 00:14:52.151 ], 00:14:52.151 "allow_any_host": true, 00:14:52.151 "hosts": [], 00:14:52.151 "serial_number": "SPDK00000000000003", 00:14:52.151 "model_number": "SPDK bdev Controller", 00:14:52.151 "max_namespaces": 32, 00:14:52.151 "min_cntlid": 1, 00:14:52.151 "max_cntlid": 65519, 00:14:52.151 "namespaces": [ 00:14:52.151 { 00:14:52.151 "nsid": 1, 00:14:52.151 "bdev_name": "Null3", 00:14:52.151 "name": "Null3", 00:14:52.151 "nguid": "C75BFEED8A6C4DAC81165FBBBEEB5F19", 00:14:52.151 "uuid": "c75bfeed-8a6c-4dac-8116-5fbbbeeb5f19" 00:14:52.151 } 00:14:52.151 ] 00:14:52.151 }, 00:14:52.151 { 00:14:52.151 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:52.151 "subtype": "NVMe", 00:14:52.151 "listen_addresses": [ 00:14:52.151 { 00:14:52.151 "trtype": "TCP", 00:14:52.151 "adrfam": "IPv4", 00:14:52.151 "traddr": "10.0.0.2", 00:14:52.151 "trsvcid": "4420" 00:14:52.151 } 00:14:52.151 ], 00:14:52.151 "allow_any_host": true, 00:14:52.151 "hosts": [], 00:14:52.151 "serial_number": "SPDK00000000000004", 00:14:52.151 "model_number": "SPDK bdev Controller", 00:14:52.151 "max_namespaces": 32, 00:14:52.151 "min_cntlid": 1, 00:14:52.151 "max_cntlid": 65519, 00:14:52.151 "namespaces": [ 00:14:52.151 { 00:14:52.151 "nsid": 1, 00:14:52.151 "bdev_name": "Null4", 00:14:52.151 "name": "Null4", 00:14:52.151 "nguid": "9EA0D20251544A69A8EDB83BDD9A2E3A", 00:14:52.151 "uuid": "9ea0d202-5154-4a69-a8ed-b83bdd9a2e3a" 00:14:52.151 } 00:14:52.152 ] 00:14:52.152 } 00:14:52.152 ] 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.152 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.412 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:52.413 19:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.413 rmmod nvme_tcp 00:14:52.413 rmmod nvme_fabrics 00:14:52.413 rmmod nvme_keyring 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 
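The discovery test itself is a short RPC sequence issued through rpc_cmd (the test suite's wrapper for talking to the target's RPC socket, /var/tmp/spdk.sock in this run). Written out flat, with only the first subsystem shown and cnode2..cnode4 following the same pattern:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192           # transport options exactly as traced above
  rpc_cmd bdev_null_create Null1 102400 512                 # 102400 blocks x 512 B, no backing storage
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # the discovery service itself
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # the sixth log record above
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems                               # the JSON dump shown above

The six discovery log records line up with what was configured: one current discovery subsystem, four NVMe subsystems on port 4420, and one referral to port 4430.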
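Teardown then mirrors the setup before nvmftestfini unloads the kernel modules; roughly:

  for i in 1 2 3 4; do
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      rpc_cmd bdev_null_delete Null$i
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  rpc_cmd bdev_get_bdevs | jq -r '.[].name'    # empty output -> check_bdevs= , nothing left behind
  modprobe -v -r nvme-tcp                      # nvmfcleanup, as echoed by the rmmod lines above
  modprobe -v -r nvme-fabrics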
00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2814355 ']' 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2814355 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2814355 ']' 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2814355 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2814355 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2814355' 00:14:52.413 killing process with pid 2814355 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2814355 00:14:52.413 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2814355 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.354 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.902 00:14:55.902 real 0m11.702s 00:14:55.902 user 0m9.219s 00:14:55.902 sys 0m5.629s 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:55.902 ************************************ 00:14:55.902 END TEST nvmf_target_discovery 00:14:55.902 ************************************ 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.902 ************************************ 00:14:55.902 START TEST nvmf_referrals 00:14:55.902 ************************************ 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:55.902 * Looking for test storage... 00:14:55.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.902 19:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.902 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.903 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.492 19:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:02.492 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.492 19:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:02.492 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:02.492 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:02.492 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:02.492 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.493 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:02.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:15:02.753 00:15:02.753 --- 10.0.0.2 ping statistics --- 00:15:02.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.753 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:15:02.753 00:15:02.753 --- 10.0.0.1 ping statistics --- 00:15:02.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.753 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2819033 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2819033 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2819033 ']' 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
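The block above is the phy-mode network bring-up: one port of the NIC pair (cvl_0_0) is moved into a dedicated namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the nvmf target is then launched inside that namespace. A condensed sketch of the same steps, with the interface and namespace names the test derived on this machine standing in as placeholders:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP data port
ping -c 1 10.0.0.2                                               # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target app inside the namespace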
00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.753 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.014 [2024-07-22 19:19:21.776578] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:03.014 [2024-07-22 19:19:21.776688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.014 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.014 [2024-07-22 19:19:21.907547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.274 [2024-07-22 19:19:22.090956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.274 [2024-07-22 19:19:22.091000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.274 [2024-07-22 19:19:22.091013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.274 [2024-07-22 19:19:22.091022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.274 [2024-07-22 19:19:22.091032] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.274 [2024-07-22 19:19:22.091228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.274 [2024-07-22 19:19:22.091306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.274 [2024-07-22 19:19:22.091609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.274 [2024-07-22 19:19:22.091633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 [2024-07-22 19:19:22.572845] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 19:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 [2024-07-22 19:19:22.589036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:03.847 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:04.108 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:04.108 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.109 19:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:04.109 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:04.369 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:04.369 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
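What the referral test has exercised up to this point: three plain referrals (127.0.0.2, 127.0.0.3, 127.0.0.4 on port 4430) are registered, read back both over RPC and through an nvme discover against the 8009 discovery listener, removed again, and then two referrals to 127.0.0.2 are re-added with explicit subsystem NQNs (-n discovery and -n nqn.2016-06.io.spdk:cnode1). A condensed sketch of that round trip, assuming scripts/rpc.py and nvme-cli as above (the trace additionally passes the generated --hostnqn/--hostid pair to nvme discover, omitted here for brevity):

# Register a referral and read it back over RPC.
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# Cross-check from the initiator side via the discovery service on port 8009.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# Remove it again, then re-add referrals that carry an explicit subsystem NQN.
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1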
00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:04.370 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:04.629 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:04.629 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:04.629 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:04.629 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:04.629 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.630 19:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:04.630 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:04.890 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:05.151 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:05.151 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:05.151 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
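The get_discovery_entries checks in the trace tell the two referral types apart by filtering the discovery log page on its subtype field: the referral registered with -n nqn.2016-06.io.spdk:cnode1 surfaces as an "nvme subsystem" record, while the one registered against the discovery NQN surfaces as a "discovery subsystem referral" record. Roughly, with the same nvme-cli invocation the test uses (hostnqn/hostid flags again omitted):

# Expected to print nqn.2016-06.io.spdk:cnode1 while that referral is registered.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "nvme subsystem")' | jq -r .subnqn
# Expected to print nqn.2014-08.org.nvmexpress.discovery for the plain discovery referral.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "discovery subsystem referral")' | jq -r .subnqn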
00:15:05.151 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:05.151 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:05.151 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:05.151 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.412 rmmod nvme_tcp 00:15:05.412 rmmod nvme_fabrics 00:15:05.412 rmmod nvme_keyring 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2819033 ']' 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2819033 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2819033 ']' 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2819033 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2819033 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2819033' 00:15:05.412 killing process with pid 2819033 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2819033 00:15:05.412 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2819033 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.354 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:08.900 00:15:08.900 real 0m12.905s 00:15:08.900 user 0m14.093s 00:15:08.900 sys 0m6.099s 00:15:08.900 19:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:08.900 ************************************ 00:15:08.900 END TEST nvmf_referrals 00:15:08.900 ************************************ 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:08.900 ************************************ 00:15:08.900 START TEST nvmf_connect_disconnect 00:15:08.900 ************************************ 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:08.900 * Looking for test storage... 00:15:08.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.900 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:15:08.901 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:15:15.491 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.491 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.491 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:15.492 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:15.492 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.492 19:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:15.492 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:15.492 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
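The discovery pass above reduces to a sysfs walk: every allow-listed PCI function is resolved to the net interface(s) it exposes, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1. A stand-alone sketch of that lookup, using the two addresses reported in this run:

# Sysfs lookup equivalent to the pci_net_devs glob traced above.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue          # glob did not match: no netdev bound
        echo "$pci -> ${netdir##*/}"          # e.g. 0000:4b:00.0 -> cvl_0_0
    done
done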
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.492 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:15:15.754 00:15:15.754 --- 10.0.0.2 ping statistics --- 00:15:15.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.754 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:15.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:15:15.754 00:15:15.754 --- 10.0.0.1 ping statistics --- 00:15:15.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.754 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2823821 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2823821 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2823821 ']' 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.754 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.015 [2024-07-22 19:19:34.756478] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
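Stripped of the xtrace noise, nvmf_tcp_init above builds a two-sided rig out of one host: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and gets 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP/4420 is opened on the initiator interface, and both directions are ping-verified before nvmf_tgt is started inside the namespace with ip netns exec. Condensed, with the exact names from this run:

# Condensed from the commands traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP listener port
ping -c 1 10.0.0.2                                              # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target netns -> initiator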
00:15:16.015 [2024-07-22 19:19:34.756605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.015 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.015 [2024-07-22 19:19:34.890826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.317 [2024-07-22 19:19:35.078438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.317 [2024-07-22 19:19:35.078481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.317 [2024-07-22 19:19:35.078493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.317 [2024-07-22 19:19:35.078503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.317 [2024-07-22 19:19:35.078513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.317 [2024-07-22 19:19:35.078694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.317 [2024-07-22 19:19:35.078778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.317 [2024-07-22 19:19:35.078913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.317 [2024-07-22 19:19:35.078939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.587 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.587 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:15:16.587 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.587 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.587 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.847 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.847 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:16.847 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.847 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.847 [2024-07-22 19:19:35.553852] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.847 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.847 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.848 19:19:35 
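With the app listening, connect_disconnect.sh provisions it through rpc_cmd: the TCP transport and a 64 MB / 512 B Malloc bdev are created here, and the subsystem, namespace and listener follow just below. Assuming rpc_cmd maps onto scripts/rpc.py against the default /var/tmp/spdk.sock (as the waitforlisten message above suggests), the equivalent standalone sequence would be roughly:

# Rough rpc.py equivalent of the rpc_cmd calls traced here and just below.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 64 512                         # becomes Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420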
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.848 [2024-07-22 19:19:35.650276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:15:16.848 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:19.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.452 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:17:49.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:07.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:35.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:49.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
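The hundred disconnect summaries above are nvme-cli's own output, one per iteration: with num_iterations=100 and NVME_CONNECT='nvme connect -i 8', each pass connects to cnode1 over TCP and then disconnects it, and nvme disconnect prints the "NQN:... disconnected 1 controller(s)" line captured in the log. A sketch of the loop behind those lines (the real body in connect_disconnect.sh adds checks that are elided here):

# Simplified iteration; target NQN, address and port as traced above.
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # ... namespace/I/O checks sit here in the real script ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the disconnect summary
done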
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:10.946 rmmod nvme_tcp 00:19:10.946 rmmod nvme_fabrics 00:19:10.946 rmmod nvme_keyring 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2823821 ']' 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2823821 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2823821 ']' 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2823821 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2823821 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2823821' 00:19:10.946 killing process with pid 2823821 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2823821 00:19:10.946 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2823821 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.903 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:14.503 00:19:14.503 real 4m5.548s 00:19:14.503 user 15m34.351s 00:19:14.503 sys 0m23.215s 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
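Teardown then runs in the trace above: modprobe -r unloads nvme_tcp, nvme_fabrics and nvme_keyring on the host side, and killprocess stops the target pid recorded at startup only after checking what that pid actually is. A simplified sketch of the guard whose individual checks appear in the trace (the full helper in autotest_common.sh also covers the sudo-wrapped case, which this run did not hit):

# Sketch, not the real helper; follows the branch taken in this run.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0            # target already gone
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" != sudo ]; then                      # this run: name was reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true
}
killprocess 2823821    # the nvmfpid captured when the target was launched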
common/autotest_common.sh@10 -- # set +x 00:19:14.503 ************************************ 00:19:14.503 END TEST nvmf_connect_disconnect 00:19:14.503 ************************************ 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:14.503 ************************************ 00:19:14.503 START TEST nvmf_multitarget 00:19:14.503 ************************************ 00:19:14.503 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:14.503 * Looking for test storage... 00:19:14.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.503 
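The END TEST / START TEST banners and the real/user/sys block above come from the run_test harness in autotest_common.sh that launches each of these scripts; the helper itself is never traced here, so the following is only a hypothetical reduction of what produces that output:

# Hypothetical reduction of run_test; the real helper does more argument and
# exit-status bookkeeping than shown.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # produces the real/user/sys summary seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp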
19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:19:14.503 19:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.503 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.504 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.101 19:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:21.101 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
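The NIC discovery now repeats for the multitarget test: the e810, x722 and mlx arrays above are allow-lists of vendor:device IDs, and each detected function is pattern-matched against them (the \0\x\1\0\1\7 and \0\x\1\0\1\9 comparisons single out two specific Mellanox IDs from that list). A quick way to see which of the allow-listed parts exist on a host, with the IDs copied from the trace:

# lspci filter over the vendor:device pairs listed in the arrays above.
for id in 8086:1592 8086:159b 8086:37d2 \
          15b3:a2dc 15b3:1021 15b3:a2d6 15b3:101d \
          15b3:1017 15b3:1019 15b3:1015 15b3:1013; do
    lspci -Dnd "$id"        # -D domain, -n numeric IDs, -d vendor:device filter
done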
00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:21.101 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.101 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:21.102 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:21.102 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.102 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.102 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.102 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:21.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:21.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:19:21.364 00:19:21.364 --- 10.0.0.2 ping statistics --- 00:19:21.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.364 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:19:21.364 00:19:21.364 --- 10.0.0.1 ping statistics --- 00:19:21.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.364 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2875392 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2875392 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2875392 ']' 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
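The nvmf_tcp_init block traced above wires the two ice ports back-to-back: the target-side port is moved into a private network namespace so traffic between 10.0.0.1 (initiator) and 10.0.0.2 (target) really crosses the link, TCP port 4420 is opened in the host firewall, and both directions are ping-verified. Condensed to the ip/iptables calls shown in the trace (run as root; interface names are the cvl_0_0/cvl_0_1 devices found above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The nvmf_tgt being started here is prefixed with ip netns exec cvl_0_0_ns_spdk, which is why its listener at 10.0.0.2:4420 ends up on the namespaced port.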
00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.364 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:21.364 [2024-07-22 19:23:40.212989] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:21.364 [2024-07-22 19:23:40.213112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.364 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.626 [2024-07-22 19:23:40.347097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.626 [2024-07-22 19:23:40.533496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.626 [2024-07-22 19:23:40.533537] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.626 [2024-07-22 19:23:40.533553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.626 [2024-07-22 19:23:40.533563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.626 [2024-07-22 19:23:40.533574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.626 [2024-07-22 19:23:40.533762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.626 [2024-07-22 19:23:40.533864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.626 [2024-07-22 19:23:40.533978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.626 [2024-07-22 19:23:40.534004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.198 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.198 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:19:22.198 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.198 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.198 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:22.198 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.198 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:22.198 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:22.198 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:19:22.198 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:22.198 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:19:22.458 "nvmf_tgt_1" 00:19:22.458 19:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:22.458 "nvmf_tgt_2" 00:19:22.458 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:22.458 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:19:22.458 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:22.720 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:19:22.720 true 00:19:22.720 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:22.720 true 00:19:22.720 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:22.720 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.981 rmmod nvme_tcp 00:19:22.981 rmmod nvme_fabrics 00:19:22.981 rmmod nvme_keyring 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2875392 ']' 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2875392 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2875392 ']' 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2875392 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
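Stripped of the xtrace noise, the multitarget exercise above is a short RPC round-trip against the running nvmf_tgt: count the targets, add two named targets, confirm the count went from 1 to 3, delete them again, and confirm it is back to 1. Roughly (same helper script and arguments as in the trace; the rpc shell variable is only shorthand):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]     # only the default target at start
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]     # default plus the two just created
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default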
00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2875392 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2875392' 00:19:22.981 killing process with pid 2875392 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2875392 00:19:22.981 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2875392 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.925 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.839 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.839 00:19:25.839 real 0m11.797s 00:19:25.839 user 0m10.768s 00:19:25.839 sys 0m5.685s 00:19:25.839 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.839 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:25.839 ************************************ 00:19:25.839 END TEST nvmf_multitarget 00:19:25.839 ************************************ 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.101 ************************************ 00:19:26.101 START TEST nvmf_rpc 00:19:26.101 ************************************ 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:19:26.101 * Looking for test storage... 
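The teardown that closes the multitarget test is visible in the rmmod and kill lines above: nvmftestfini unloads the initiator-side kernel modules, stops the nvmf_tgt by pid, and tears the namespaced interface setup back down. In outline (pid value taken from the trace; the namespace itself is removed by the _remove_spdk_ns helper, whose body is not shown here):

  modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 2875392                   # killprocess: stop the nvmf_tgt started for this test
  wait 2875392
  ip -4 addr flush cvl_0_1       # nvmf_tcp_fini; cvl_0_0_ns_spdk is cleaned up by _remove_spdk_ns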
00:19:26.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:26.101 19:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.101 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.241 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.242 19:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:34.242 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:34.242 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.242 
19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:34.242 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:34.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.242 19:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:34.242 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:34.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:19:34.242 00:19:34.242 --- 10.0.0.2 ping statistics --- 00:19:34.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.242 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:19:34.242 00:19:34.242 --- 10.0.0.1 ping statistics --- 00:19:34.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.242 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2880072 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2880072 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2880072 ']' 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.242 [2024-07-22 19:23:52.175154] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:34.242 [2024-07-22 19:23:52.175256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.242 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.242 [2024-07-22 19:23:52.295670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.242 [2024-07-22 19:23:52.475958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.242 [2024-07-22 19:23:52.476006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.242 [2024-07-22 19:23:52.476018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.242 [2024-07-22 19:23:52.476029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.242 [2024-07-22 19:23:52.476039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
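For the rpc.sh run the target is brought up the same way: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace and the harness blocks until the app is listening on /var/tmp/spdk.sock. A simplified stand-in for that launch plus waitforlisten (the polling loop here is an assumption, not the helper's actual body):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done    # RPC socket appears once init finishes
  echo "nvmf_tgt ($nvmfpid) is up and listening on /var/tmp/spdk.sock"

The -m 0xF core mask is why four reactors come up, one per core 0-3, in the notices that follow.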
00:19:34.242 [2024-07-22 19:23:52.476242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.242 [2024-07-22 19:23:52.476314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.242 [2024-07-22 19:23:52.476646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.242 [2024-07-22 19:23:52.476669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.242 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:19:34.242 "tick_rate": 2400000000, 00:19:34.242 "poll_groups": [ 00:19:34.242 { 00:19:34.242 "name": "nvmf_tgt_poll_group_000", 00:19:34.242 "admin_qpairs": 0, 00:19:34.242 "io_qpairs": 0, 00:19:34.242 "current_admin_qpairs": 0, 00:19:34.242 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [] 00:19:34.243 }, 00:19:34.243 { 00:19:34.243 "name": "nvmf_tgt_poll_group_001", 00:19:34.243 "admin_qpairs": 0, 00:19:34.243 "io_qpairs": 0, 00:19:34.243 "current_admin_qpairs": 0, 00:19:34.243 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [] 00:19:34.243 }, 00:19:34.243 { 00:19:34.243 "name": "nvmf_tgt_poll_group_002", 00:19:34.243 "admin_qpairs": 0, 00:19:34.243 "io_qpairs": 0, 00:19:34.243 "current_admin_qpairs": 0, 00:19:34.243 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [] 00:19:34.243 }, 00:19:34.243 { 00:19:34.243 "name": "nvmf_tgt_poll_group_003", 00:19:34.243 "admin_qpairs": 0, 00:19:34.243 "io_qpairs": 0, 00:19:34.243 "current_admin_qpairs": 0, 00:19:34.243 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [] 00:19:34.243 } 00:19:34.243 ] 00:19:34.243 }' 00:19:34.243 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:19:34.243 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:19:34.243 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:19:34.243 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
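The jcount and jsum helpers used in this test are just jq plus coreutils over the nvmf_get_stats JSON shown above: jcount counts matches of a filter, jsum adds them up; the transport check and nvmf_create_transport call that follow complete the sequence. Against the same stats (the rpc.py invocation is shorthand for the rpc_cmd wrapper and assumes the default /var/tmp/spdk.sock):

  stats=$(scripts/rpc.py nvmf_get_stats)
  echo "$stats" | jq '.poll_groups[].name' | wc -l                                  # jcount -> 4 poll groups
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'    # jsum   -> 0 qpairs so far
  echo "$stats" | jq '.poll_groups[0].transports[0]'                                # null until a transport exists
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                            # afterwards every group lists TCP

The second nvmf_get_stats that follows confirms the effect: each of the four poll groups now carries a "trtype": "TCP" entry in its transports array.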
00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.243 [2024-07-22 19:23:53.085399] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:19:34.243 "tick_rate": 2400000000, 00:19:34.243 "poll_groups": [ 00:19:34.243 { 00:19:34.243 "name": "nvmf_tgt_poll_group_000", 00:19:34.243 "admin_qpairs": 0, 00:19:34.243 "io_qpairs": 0, 00:19:34.243 "current_admin_qpairs": 0, 00:19:34.243 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [ 00:19:34.243 { 00:19:34.243 "trtype": "TCP" 00:19:34.243 } 00:19:34.243 ] 00:19:34.243 }, 00:19:34.243 { 00:19:34.243 "name": "nvmf_tgt_poll_group_001", 00:19:34.243 "admin_qpairs": 0, 00:19:34.243 "io_qpairs": 0, 00:19:34.243 "current_admin_qpairs": 0, 00:19:34.243 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [ 00:19:34.243 { 00:19:34.243 "trtype": "TCP" 00:19:34.243 } 00:19:34.243 ] 00:19:34.243 }, 00:19:34.243 { 00:19:34.243 "name": "nvmf_tgt_poll_group_002", 00:19:34.243 "admin_qpairs": 0, 00:19:34.243 "io_qpairs": 0, 00:19:34.243 "current_admin_qpairs": 0, 00:19:34.243 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [ 00:19:34.243 { 00:19:34.243 "trtype": "TCP" 00:19:34.243 } 00:19:34.243 ] 00:19:34.243 }, 00:19:34.243 { 00:19:34.243 "name": "nvmf_tgt_poll_group_003", 00:19:34.243 "admin_qpairs": 0, 00:19:34.243 "io_qpairs": 0, 00:19:34.243 "current_admin_qpairs": 0, 00:19:34.243 "current_io_qpairs": 0, 00:19:34.243 "pending_bdev_io": 0, 00:19:34.243 "completed_nvme_io": 0, 00:19:34.243 "transports": [ 00:19:34.243 { 00:19:34.243 "trtype": "TCP" 00:19:34.243 } 00:19:34.243 ] 00:19:34.243 } 00:19:34.243 ] 00:19:34.243 }' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:19:34.243 19:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:34.243 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.504 Malloc1 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.504 [2024-07-22 19:23:53.314553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:19:34.504 [2024-07-22 19:23:53.341700] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:19:34.504 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:34.504 could not add new controller: failed to write to nvme-fabrics device 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.504 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.505 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.505 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.505 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:36.418 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:19:36.418 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:36.418 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.418 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:36.418 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:38.332 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:38.332 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:38.332 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:38.332 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:38.332 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.332 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:38.332 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:38.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.332 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:38.333 [2024-07-22 19:23:57.232239] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:19:38.333 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:38.333 could not add new controller: failed to write to nvme-fabrics device 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.333 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:40.247 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:40.247 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:40.247 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:40.247 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:40.247 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
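[Editor's note — not part of the captured log.] The run above exercises target/rpc.sh's per-host authorization path: the first nvme connect is wrapped in NOT/valid_exec_arg and is expected to fail because nqn.2016-06.io.spdk:cnode1 has no allowed hosts yet, it succeeds after nvmf_subsystem_add_host, is rejected again after nvmf_subsystem_remove_host, and succeeds once nvmf_subsystem_allow_any_host -e is issued. A minimal standalone sketch of that flow, assuming an SPDK target is already listening on 10.0.0.2:4420 and that SPDK's scripts/rpc.py (the tool behind the harness's rpc_cmd) is invocable as rpc.py:

  # Sketch only; the rpc.py path, IP, port and serial are taken from or assumed around the log above.
  HOSTNQN=$(nvme gen-hostnqn)
  SUBNQN=nqn.2016-06.io.spdk:cnode1

  rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1

  # No hosts allowed yet: the target logs nvmf_qpair_access_allowed and the connect fails.
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" || true

  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"      # whitelist exactly this host
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
  nvme disconnect -n "$SUBNQN"

  rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # back to rejecting this host
  rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"        # or open the subsystem to all hosts
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"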
00:19:42.171 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:42.171 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:42.171 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:42.171 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:42.171 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:42.172 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:42.172 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:42.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:42.172 [2024-07-22 19:24:01.089713] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.172 
19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.172 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:44.086 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:44.086 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:44.086 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:44.086 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:44.086 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:46.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
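[Editor's note — not part of the captured log.] The waitforserial and waitforserial_disconnect calls above are the harness's readiness checks: they poll lsblk for a block device whose SERIAL column matches the subsystem serial (SPDKISFASTANDAWESOME) and return once the namespace has appeared or, after nvme disconnect, disappeared. A simplified standalone equivalent of that polling (the real helpers in autotest_common.sh carry extra bookkeeping; the retry bound and the lsblk/grep pipeline below are the ones visible in the log):

  # Sketch only; mirrors the "lsblk -l -o NAME,SERIAL | grep -c" polling seen above.
  wait_for_serial() {
      local serial=$1 want=${2:-1} i=0 found
      while (( i++ <= 15 )); do
          found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( found == want )) && return 0
          sleep 2
      done
      return 1
  }

  wait_for_serial SPDKISFASTANDAWESOME 1   # block until the fabrics namespace shows up
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  wait_for_serial SPDKISFASTANDAWESOME 0   # and until it is gone again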
00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.000 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 [2024-07-22 19:24:04.964961] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.261 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:47.647 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:47.647 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:19:47.647 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:47.647 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:47.647 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:49.559 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:49.559 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:49.559 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:49.559 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:49.559 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:49.559 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:49.559 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:49.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.821 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:50.082 [2024-07-22 19:24:08.813481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.082 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:51.470 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:51.470 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:51.470 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:51.470 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:51.470 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:53.385 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:53.385 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:53.385 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:53.645 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:53.645 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.645 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:53.645 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:53.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:53.907 19:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 [2024-07-22 19:24:12.706857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.907 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:55.303 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:55.303 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.303 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:55.303 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:55.303 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.847 [2024-07-22 19:24:16.557973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.847 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:59.230 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:59.230 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:59.230 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:59.230 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:59.230 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:01.773 19:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:01.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.773 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 [2024-07-22 19:24:20.445588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 [2024-07-22 19:24:20.509725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 [2024-07-22 19:24:20.573933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 [2024-07-22 19:24:20.634072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.774 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.774 [2024-07-22 19:24:20.694291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.775 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.034 19:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.034 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:20:02.034 "tick_rate": 2400000000, 00:20:02.034 "poll_groups": [ 00:20:02.034 { 00:20:02.034 "name": "nvmf_tgt_poll_group_000", 00:20:02.034 "admin_qpairs": 0, 00:20:02.034 "io_qpairs": 224, 00:20:02.034 "current_admin_qpairs": 0, 00:20:02.034 "current_io_qpairs": 0, 00:20:02.034 "pending_bdev_io": 0, 00:20:02.034 "completed_nvme_io": 276, 00:20:02.034 "transports": [ 00:20:02.034 { 00:20:02.034 "trtype": "TCP" 00:20:02.034 } 00:20:02.034 ] 00:20:02.034 }, 00:20:02.034 { 00:20:02.034 "name": "nvmf_tgt_poll_group_001", 00:20:02.034 "admin_qpairs": 1, 00:20:02.034 "io_qpairs": 223, 00:20:02.034 "current_admin_qpairs": 0, 00:20:02.034 "current_io_qpairs": 0, 00:20:02.034 "pending_bdev_io": 0, 00:20:02.034 "completed_nvme_io": 518, 00:20:02.034 "transports": [ 00:20:02.034 { 00:20:02.034 "trtype": "TCP" 00:20:02.034 } 00:20:02.034 ] 00:20:02.034 }, 00:20:02.034 { 00:20:02.034 "name": "nvmf_tgt_poll_group_002", 00:20:02.034 "admin_qpairs": 6, 00:20:02.034 "io_qpairs": 218, 00:20:02.034 "current_admin_qpairs": 0, 00:20:02.034 "current_io_qpairs": 0, 00:20:02.034 "pending_bdev_io": 0, 00:20:02.035 "completed_nvme_io": 221, 00:20:02.035 "transports": [ 00:20:02.035 { 00:20:02.035 "trtype": "TCP" 00:20:02.035 } 00:20:02.035 ] 00:20:02.035 }, 00:20:02.035 { 00:20:02.035 "name": "nvmf_tgt_poll_group_003", 00:20:02.035 "admin_qpairs": 0, 00:20:02.035 "io_qpairs": 224, 00:20:02.035 "current_admin_qpairs": 0, 00:20:02.035 "current_io_qpairs": 0, 00:20:02.035 "pending_bdev_io": 0, 00:20:02.035 "completed_nvme_io": 224, 00:20:02.035 "transports": [ 00:20:02.035 { 00:20:02.035 "trtype": "TCP" 00:20:02.035 } 00:20:02.035 ] 00:20:02.035 } 00:20:02.035 ] 00:20:02.035 }' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.035 rmmod nvme_tcp 00:20:02.035 rmmod nvme_fabrics 00:20:02.035 rmmod nvme_keyring 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2880072 ']' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2880072 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2880072 ']' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2880072 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2880072 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2880072' 00:20:02.035 killing process with pid 2880072 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2880072 00:20:02.035 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2880072 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.418 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.397 00:20:05.397 real 0m39.162s 00:20:05.397 user 1m58.648s 00:20:05.397 sys 0m7.378s 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.397 ************************************ 00:20:05.397 END TEST nvmf_rpc 00:20:05.397 ************************************ 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.397 ************************************ 00:20:05.397 START TEST nvmf_invalid 00:20:05.397 ************************************ 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:05.397 * Looking for test storage... 
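[Editor's note — not part of the captured log.] Before nvmf_invalid proceeds below, the nvmf_rpc run above winds down by pulling per-poll-group counters with nvmf_get_stats and feeding them through the jsum helper, which is just a jq projection piped into awk: summing .poll_groups[].admin_qpairs gives 7 and .poll_groups[].io_qpairs gives 889 for this run, both asserted non-zero before nvmftestfini unloads the nvme-tcp/nvme-fabrics modules and kills the target process (pid 2880072). A one-liner equivalent of that aggregation, assuming rpc.py can still reach the running target:

  # Sketch only; the jq filter and awk reducer are the same ones jsum uses in the log above.
  rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'
  # Equivalent pure-jq form:
  rpc.py nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add'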
00:20:05.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
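From here on, invalid.sh drives the target entirely through the JSON-RPC client at scripts/rpc.py (which by default talks to /var/tmp/spdk.sock, the socket the target is waited on further down), creating subsystems under the nqn.2016-06.io.spdk:cnode prefix. The general shape of such a call, sketched standalone; the $RANDOM suffix is an assumption for illustration, the log only shows the resulting names such as cnode7659:

    # Sketch: create a subsystem on a non-existent target name and capture the
    # JSON-RPC error text, the way the checks further down do.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode
    out=$("$rpc" nvmf_create_subsystem -t foobar "$nqn$RANDOM" 2>&1) || true
    [[ $out == *"Unable to find target"* ]] && echo "rejected as expected"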
00:20:05.397 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:20:05.398 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.541 19:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.541 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:13.542 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:13.542 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
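Both ports of the 0x8086:0x159b (E810) adapter are accepted here and are bound to the ice driver; the next step walks sysfs to find the net device behind each PCI function. A rough standalone equivalent of that lookup, with the device IDs taken from this log and the loop itself only a sketch of what gather_supported_nvmf_pci_devs does:

    # Sketch: list 8086:159b functions and the netdev behind each, mirroring
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        driver=$(basename "$(readlink -f "$pci/driver")")
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found $(basename "$pci") ($driver): $(basename "$net")"
        done
    done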
00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:13.542 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:13.542 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:20:13.542 00:20:13.542 --- 10.0.0.2 ping statistics --- 00:20:13.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.542 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:20:13.542 00:20:13.542 --- 10.0.0.1 ping statistics --- 00:20:13.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.542 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2890490 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2890490 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2890490 ']' 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.542 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:13.542 [2024-07-22 19:24:31.481499] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
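The TCP topology built above uses the two E810 ports back to back: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and gets the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, a firewall exception for TCP/4420 is added on the initiator interface, and both directions are verified with a single ping before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch, with names and addresses as they appear in this log and error handling omitted:

    # Sketch of the namespace topology used by these TCP tests.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                   # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # namespace -> initiator
    # The target itself then runs inside the namespace, as in the log:
    #   ip netns exec "$NS" .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF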
00:20:13.542 [2024-07-22 19:24:31.481603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.542 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.542 [2024-07-22 19:24:31.602996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.542 [2024-07-22 19:24:31.786712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.542 [2024-07-22 19:24:31.786758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.542 [2024-07-22 19:24:31.786771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.542 [2024-07-22 19:24:31.786781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.542 [2024-07-22 19:24:31.786791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.542 [2024-07-22 19:24:31.786998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.542 [2024-07-22 19:24:31.789229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.543 [2024-07-22 19:24:31.789346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.543 [2024-07-22 19:24:31.789435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7659 00:20:13.543 [2024-07-22 19:24:32.404239] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:20:13.543 { 00:20:13.543 "nqn": "nqn.2016-06.io.spdk:cnode7659", 00:20:13.543 "tgt_name": "foobar", 00:20:13.543 "method": "nvmf_create_subsystem", 00:20:13.543 "req_id": 1 00:20:13.543 } 00:20:13.543 Got JSON-RPC error response 00:20:13.543 response: 00:20:13.543 { 00:20:13.543 "code": -32603, 00:20:13.543 "message": "Unable to find target foobar" 00:20:13.543 }' 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:20:13.543 { 00:20:13.543 "nqn": "nqn.2016-06.io.spdk:cnode7659", 00:20:13.543 "tgt_name": "foobar", 00:20:13.543 "method": "nvmf_create_subsystem", 00:20:13.543 "req_id": 1 00:20:13.543 
} 00:20:13.543 Got JSON-RPC error response 00:20:13.543 response: 00:20:13.543 { 00:20:13.543 "code": -32603, 00:20:13.543 "message": "Unable to find target foobar" 00:20:13.543 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:20:13.543 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3190 00:20:13.803 [2024-07-22 19:24:32.580824] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3190: invalid serial number 'SPDKISFASTANDAWESOME' 00:20:13.803 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:20:13.803 { 00:20:13.803 "nqn": "nqn.2016-06.io.spdk:cnode3190", 00:20:13.803 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:13.803 "method": "nvmf_create_subsystem", 00:20:13.803 "req_id": 1 00:20:13.803 } 00:20:13.803 Got JSON-RPC error response 00:20:13.803 response: 00:20:13.803 { 00:20:13.803 "code": -32602, 00:20:13.803 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:13.803 }' 00:20:13.803 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:20:13.803 { 00:20:13.803 "nqn": "nqn.2016-06.io.spdk:cnode3190", 00:20:13.803 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:13.804 "method": "nvmf_create_subsystem", 00:20:13.804 "req_id": 1 00:20:13.804 } 00:20:13.804 Got JSON-RPC error response 00:20:13.804 response: 00:20:13.804 { 00:20:13.804 "code": -32602, 00:20:13.804 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:13.804 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:13.804 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:20:13.804 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11222 00:20:14.065 [2024-07-22 19:24:32.757375] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11222: invalid model number 'SPDK_Controller' 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:20:14.065 { 00:20:14.065 "nqn": "nqn.2016-06.io.spdk:cnode11222", 00:20:14.065 "model_number": "SPDK_Controller\u001f", 00:20:14.065 "method": "nvmf_create_subsystem", 00:20:14.065 "req_id": 1 00:20:14.065 } 00:20:14.065 Got JSON-RPC error response 00:20:14.065 response: 00:20:14.065 { 00:20:14.065 "code": -32602, 00:20:14.065 "message": "Invalid MN SPDK_Controller\u001f" 00:20:14.065 }' 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:20:14.065 { 00:20:14.065 "nqn": "nqn.2016-06.io.spdk:cnode11222", 00:20:14.065 "model_number": "SPDK_Controller\u001f", 00:20:14.065 "method": "nvmf_create_subsystem", 00:20:14.065 "req_id": 1 00:20:14.065 } 00:20:14.065 Got JSON-RPC error response 00:20:14.065 response: 00:20:14.065 { 00:20:14.065 "code": -32602, 00:20:14.065 "message": "Invalid MN SPDK_Controller\u001f" 00:20:14.065 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 
ll 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:20:14.065 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:20:14.066 
19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'f,bW@uN*WojB[QuL02`j+' 00:20:14.066 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'f,bW@uN*WojB[QuL02`j+' nqn.2016-06.io.spdk:cnode8774 00:20:14.328 [2024-07-22 19:24:33.090507] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8774: invalid serial number 'f,bW@uN*WojB[QuL02`j+' 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:20:14.328 { 00:20:14.328 "nqn": "nqn.2016-06.io.spdk:cnode8774", 00:20:14.328 "serial_number": "f,bW@uN*WojB[QuL02`j+", 00:20:14.328 "method": "nvmf_create_subsystem", 00:20:14.328 "req_id": 1 00:20:14.328 } 00:20:14.328 Got JSON-RPC error response 00:20:14.328 response: 00:20:14.328 { 00:20:14.328 "code": -32602, 00:20:14.328 "message": "Invalid SN f,bW@uN*WojB[QuL02`j+" 00:20:14.328 }' 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:20:14.328 { 00:20:14.328 "nqn": "nqn.2016-06.io.spdk:cnode8774", 00:20:14.328 "serial_number": "f,bW@uN*WojB[QuL02`j+", 00:20:14.328 "method": "nvmf_create_subsystem", 00:20:14.328 "req_id": 1 00:20:14.328 } 00:20:14.328 Got JSON-RPC error response 00:20:14.328 response: 00:20:14.328 { 00:20:14.328 "code": -32602, 00:20:14.328 "message": "Invalid SN f,bW@uN*WojB[QuL02`j+" 00:20:14.328 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' 
'75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.328 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:20:14.329 
19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:20:14.329 
19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 
19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.329 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 
19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:14.590 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 
00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '[2nBa.U%3&1bHO/b9Z}_054"9Z20ixs?FHoW_%*;>' 00:20:14.591 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[2nBa.U%3&1bHO/b9Z}_054"9Z20ixs?FHoW_%*;>' nqn.2016-06.io.spdk:cnode18455 00:20:14.851 [2024-07-22 19:24:33.568091] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18455: invalid model number '[2nBa.U%3&1bHO/b9Z}_054"9Z20ixs?FHoW_%*;>' 00:20:14.851 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:20:14.851 { 00:20:14.851 "nqn": "nqn.2016-06.io.spdk:cnode18455", 00:20:14.852 "model_number": "[2nBa.U%3&1bHO/b9Z}_054\"9Z20ixs?FHoW_%*;>", 00:20:14.852 "method": "nvmf_create_subsystem", 00:20:14.852 "req_id": 1 00:20:14.852 } 00:20:14.852 Got JSON-RPC error response 00:20:14.852 response: 00:20:14.852 { 00:20:14.852 "code": -32602, 00:20:14.852 "message": "Invalid MN [2nBa.U%3&1bHO/b9Z}_054\"9Z20ixs?FHoW_%*;>" 00:20:14.852 }' 00:20:14.852 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:20:14.852 { 00:20:14.852 "nqn": "nqn.2016-06.io.spdk:cnode18455", 00:20:14.852 "model_number": "[2nBa.U%3&1bHO/b9Z}_054\"9Z20ixs?FHoW_%*;>", 00:20:14.852 "method": "nvmf_create_subsystem", 00:20:14.852 "req_id": 1 00:20:14.852 } 00:20:14.852 Got JSON-RPC error response 00:20:14.852 response: 00:20:14.852 { 00:20:14.852 "code": -32602, 00:20:14.852 "message": "Invalid MN [2nBa.U%3&1bHO/b9Z}_054\"9Z20ixs?FHoW_%*;>" 00:20:14.852 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:14.852 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:20:14.852 [2024-07-22 19:24:33.740761] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.852 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:20:15.113 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:20:15.113 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:20:15.113 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:20:15.113 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:20:15.113 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:20:15.374 [2024-07-22 19:24:34.085921] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:20:15.374 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:20:15.374 { 00:20:15.374 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:15.374 "listen_address": { 00:20:15.374 "trtype": "tcp", 00:20:15.374 "traddr": "", 00:20:15.374 "trsvcid": "4421" 00:20:15.374 }, 00:20:15.374 "method": "nvmf_subsystem_remove_listener", 00:20:15.374 "req_id": 1 00:20:15.374 } 00:20:15.374 Got JSON-RPC error response 00:20:15.374 response: 00:20:15.374 { 00:20:15.374 "code": -32602, 00:20:15.374 "message": "Invalid parameters" 00:20:15.374 }' 00:20:15.374 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:20:15.374 { 00:20:15.374 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:15.374 "listen_address": { 00:20:15.374 "trtype": "tcp", 00:20:15.374 "traddr": "", 00:20:15.374 "trsvcid": "4421" 00:20:15.374 }, 00:20:15.374 "method": "nvmf_subsystem_remove_listener", 00:20:15.374 "req_id": 1 00:20:15.374 } 00:20:15.374 Got JSON-RPC error response 00:20:15.374 response: 00:20:15.374 { 00:20:15.374 "code": -32602, 00:20:15.374 "message": "Invalid parameters" 00:20:15.374 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:20:15.374 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4538 -i 0 00:20:15.374 [2024-07-22 19:24:34.262503] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4538: invalid cntlid range [0-65519] 00:20:15.374 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:20:15.374 { 00:20:15.374 "nqn": "nqn.2016-06.io.spdk:cnode4538", 00:20:15.374 "min_cntlid": 0, 00:20:15.374 "method": "nvmf_create_subsystem", 00:20:15.374 "req_id": 1 00:20:15.374 } 00:20:15.374 Got JSON-RPC error response 00:20:15.374 response: 00:20:15.374 { 00:20:15.374 "code": -32602, 00:20:15.374 "message": "Invalid cntlid range [0-65519]" 00:20:15.374 }' 00:20:15.374 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:20:15.374 { 00:20:15.374 "nqn": "nqn.2016-06.io.spdk:cnode4538", 00:20:15.374 "min_cntlid": 0, 00:20:15.374 "method": "nvmf_create_subsystem", 00:20:15.374 "req_id": 1 00:20:15.374 } 00:20:15.374 Got JSON-RPC error response 00:20:15.374 response: 00:20:15.374 { 00:20:15.374 "code": -32602, 00:20:15.374 "message": "Invalid cntlid range [0-65519]" 00:20:15.374 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:15.374 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26890 -i 65520 00:20:15.635 [2024-07-22 19:24:34.439096] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26890: invalid cntlid range [65520-65519] 00:20:15.635 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:20:15.635 { 00:20:15.635 "nqn": "nqn.2016-06.io.spdk:cnode26890", 00:20:15.635 "min_cntlid": 65520, 00:20:15.635 "method": "nvmf_create_subsystem", 00:20:15.635 "req_id": 1 00:20:15.635 } 00:20:15.635 Got JSON-RPC error response 00:20:15.635 response: 00:20:15.635 { 00:20:15.635 "code": -32602, 00:20:15.635 "message": "Invalid cntlid range [65520-65519]" 00:20:15.635 }' 00:20:15.635 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:20:15.635 { 00:20:15.635 "nqn": "nqn.2016-06.io.spdk:cnode26890", 00:20:15.635 "min_cntlid": 65520, 00:20:15.635 "method": "nvmf_create_subsystem", 00:20:15.635 "req_id": 1 00:20:15.635 } 00:20:15.635 Got JSON-RPC error response 00:20:15.635 response: 00:20:15.635 { 00:20:15.635 "code": -32602, 00:20:15.635 "message": "Invalid cntlid range [65520-65519]" 00:20:15.635 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:15.635 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16580 -I 0 00:20:15.896 [2024-07-22 19:24:34.603630] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16580: invalid cntlid range [1-0] 00:20:15.896 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:20:15.896 { 00:20:15.896 "nqn": "nqn.2016-06.io.spdk:cnode16580", 00:20:15.896 "max_cntlid": 0, 00:20:15.896 "method": "nvmf_create_subsystem", 00:20:15.896 "req_id": 1 00:20:15.896 } 00:20:15.896 Got JSON-RPC error response 00:20:15.896 response: 00:20:15.896 { 00:20:15.896 "code": -32602, 00:20:15.896 "message": "Invalid cntlid range [1-0]" 00:20:15.896 }' 00:20:15.896 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:20:15.896 { 00:20:15.896 "nqn": "nqn.2016-06.io.spdk:cnode16580", 00:20:15.896 "max_cntlid": 0, 00:20:15.896 "method": "nvmf_create_subsystem", 00:20:15.896 "req_id": 1 00:20:15.896 } 00:20:15.896 Got JSON-RPC error response 00:20:15.896 response: 00:20:15.896 { 00:20:15.896 "code": -32602, 00:20:15.896 "message": "Invalid cntlid range [1-0]" 00:20:15.896 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:15.896 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17239 -I 65520 00:20:15.896 [2024-07-22 19:24:34.776256] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17239: invalid cntlid range [1-65520] 00:20:15.896 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:20:15.896 { 00:20:15.896 "nqn": "nqn.2016-06.io.spdk:cnode17239", 00:20:15.897 "max_cntlid": 65520, 00:20:15.897 "method": "nvmf_create_subsystem", 00:20:15.897 "req_id": 1 00:20:15.897 } 00:20:15.897 Got JSON-RPC error response 00:20:15.897 response: 00:20:15.897 { 00:20:15.897 "code": -32602, 00:20:15.897 "message": "Invalid cntlid range [1-65520]" 00:20:15.897 }' 
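Each of these negative-path checks (here and continuing below) follows the same pattern: feed rpc.py a deliberately invalid argument (an over-length model number, a cntlid outside 1-65519, min greater than max), capture the JSON-RPC error, and assert on its message text. A hedged sketch of one such check, with the rpc.py path shortened and the stderr redirection assumed rather than taken from the script:

# Sketch only: reproduces the min_cntlid=0 rejection logged above.
# Assumes rpc.py can reach the running target on its default RPC socket.
rpc=./scripts/rpc.py
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4538 -i 0 2>&1) || true
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo "rejected as expected: $out"
else
    echo "unexpected response: $out" >&2
    exit 1
fi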
00:20:15.897 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:20:15.897 { 00:20:15.897 "nqn": "nqn.2016-06.io.spdk:cnode17239", 00:20:15.897 "max_cntlid": 65520, 00:20:15.897 "method": "nvmf_create_subsystem", 00:20:15.897 "req_id": 1 00:20:15.897 } 00:20:15.897 Got JSON-RPC error response 00:20:15.897 response: 00:20:15.897 { 00:20:15.897 "code": -32602, 00:20:15.897 "message": "Invalid cntlid range [1-65520]" 00:20:15.897 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:15.897 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28919 -i 6 -I 5 00:20:16.157 [2024-07-22 19:24:34.944797] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28919: invalid cntlid range [6-5] 00:20:16.157 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:20:16.157 { 00:20:16.157 "nqn": "nqn.2016-06.io.spdk:cnode28919", 00:20:16.157 "min_cntlid": 6, 00:20:16.157 "max_cntlid": 5, 00:20:16.157 "method": "nvmf_create_subsystem", 00:20:16.157 "req_id": 1 00:20:16.157 } 00:20:16.157 Got JSON-RPC error response 00:20:16.157 response: 00:20:16.157 { 00:20:16.157 "code": -32602, 00:20:16.157 "message": "Invalid cntlid range [6-5]" 00:20:16.157 }' 00:20:16.157 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:20:16.157 { 00:20:16.157 "nqn": "nqn.2016-06.io.spdk:cnode28919", 00:20:16.157 "min_cntlid": 6, 00:20:16.157 "max_cntlid": 5, 00:20:16.157 "method": "nvmf_create_subsystem", 00:20:16.157 "req_id": 1 00:20:16.157 } 00:20:16.157 Got JSON-RPC error response 00:20:16.157 response: 00:20:16.157 { 00:20:16.157 "code": -32602, 00:20:16.157 "message": "Invalid cntlid range [6-5]" 00:20:16.157 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:16.157 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:20:16.157 { 00:20:16.157 "name": "foobar", 00:20:16.157 "method": "nvmf_delete_target", 00:20:16.157 "req_id": 1 00:20:16.157 } 00:20:16.157 Got JSON-RPC error response 00:20:16.157 response: 00:20:16.157 { 00:20:16.157 "code": -32602, 00:20:16.157 "message": "The specified target doesn'\''t exist, cannot delete it." 00:20:16.157 }' 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:20:16.157 { 00:20:16.157 "name": "foobar", 00:20:16.157 "method": "nvmf_delete_target", 00:20:16.157 "req_id": 1 00:20:16.157 } 00:20:16.157 Got JSON-RPC error response 00:20:16.157 response: 00:20:16.157 { 00:20:16.157 "code": -32602, 00:20:16.157 "message": "The specified target doesn't exist, cannot delete it." 
00:20:16.157 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:16.157 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:16.157 rmmod nvme_tcp 00:20:16.157 rmmod nvme_fabrics 00:20:16.417 rmmod nvme_keyring 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2890490 ']' 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2890490 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2890490 ']' 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2890490 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2890490 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2890490' 00:20:16.417 killing process with pid 2890490 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2890490 00:20:16.417 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2890490 00:20:17.357 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.357 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.357 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.357 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.357 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.357 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.357 
19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.357 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.268 00:20:19.268 real 0m14.065s 00:20:19.268 user 0m20.652s 00:20:19.268 sys 0m6.293s 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:19.268 ************************************ 00:20:19.268 END TEST nvmf_invalid 00:20:19.268 ************************************ 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.268 19:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:19.529 ************************************ 00:20:19.529 START TEST nvmf_connect_stress 00:20:19.529 ************************************ 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:19.529 * Looking for test storage... 
00:20:19.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.529 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.530 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.115 19:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:26.115 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:26.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
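The discovery trace above (and continuing below for the second port) walks each matching Intel E810 PCI function and collects the kernel network interfaces registered under it in sysfs. A minimal sketch of that lookup for the address reported here; reading operstate for the "up" check is an assumption, not necessarily how common.sh decides it:

# Sketch: list net interfaces under one E810 PCI function, as in the discovery above.
pci=0000:4b:00.0                                    # address reported in the log above
for netdir in /sys/bus/pci/devices/$pci/net/*; do
    [[ -e $netdir ]] || continue                    # glob may not match if no netdev is bound
    dev=${netdir##*/}
    state=$(cat "$netdir/operstate" 2>/dev/null)    # assumption: link state read from operstate
    echo "Found net device under $pci: $dev (state: ${state:-unknown})"
done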
00:20:26.115 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:26.116 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:26.116 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.116 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:20:26.377 00:20:26.377 --- 10.0.0.2 ping statistics --- 00:20:26.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.377 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:20:26.377 00:20:26.377 --- 10.0.0.1 ping statistics --- 00:20:26.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.377 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.377 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2895650 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2895650 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2895650 ']' 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.378 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.639 [2024-07-22 19:24:45.395578] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
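The sequence above isolates one port of the NIC pair in a network namespace so that target and initiator traffic crosses a real link: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk and hosts nvmf_tgt, while cvl_0_1 (10.0.0.1) stays in the default namespace for the initiator side, with an iptables accept for TCP/4420 and a ping in each direction as a sanity check. Consolidated from the commands logged above; requires root, the interface names are the ones this rig reported, and the nvmf_tgt path is shortened to a relative one:

# Consolidated from the log above; run as root with the two cvl_* ports present.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # default ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> default ns
# The target is then launched inside the namespace (backgrounded here for illustration):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &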
00:20:26.639 [2024-07-22 19:24:45.395685] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.639 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.639 [2024-07-22 19:24:45.532520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:26.900 [2024-07-22 19:24:45.743585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.900 [2024-07-22 19:24:45.743647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.900 [2024-07-22 19:24:45.743662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.900 [2024-07-22 19:24:45.743674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.900 [2024-07-22 19:24:45.743686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.900 [2024-07-22 19:24:45.743863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.900 [2024-07-22 19:24:45.743983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.900 [2024-07-22 19:24:45.744016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 [2024-07-22 19:24:46.183758] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 [2024-07-22 19:24:46.221839] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 NULL1 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2895904 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.472 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.473 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.473 19:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.733 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.733 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:27.733 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.733 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.733 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:28.305 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.305 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:28.305 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.305 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.305 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:28.565 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.565 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:28.565 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.565 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.565 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:28.826 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.826 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:28.826 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.826 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.826 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:29.087 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.087 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:29.087 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.087 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.087 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:29.347 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.347 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:29.347 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.347 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.347 19:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:29.919 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.919 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:29.919 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.919 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.919 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:30.180 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.181 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:30.181 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.181 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.181 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:30.442 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.442 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:30.442 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.442 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.442 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:30.702 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.702 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:30.702 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.702 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.702 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:30.969 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.969 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:30.969 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.969 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.969 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:31.297 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.297 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:31.297 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:31.297 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.297 19:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:31.868 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.868 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:31.868 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:31.868 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.868 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:32.129 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.129 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:32.129 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:32.129 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.129 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:32.390 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.390 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:32.390 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:32.390 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.390 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:32.650 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.651 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:32.651 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:32.651 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.651 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:33.222 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.222 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:33.222 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:33.222 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.222 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:33.482 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.482 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:33.482 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:33.482 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.482 19:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:33.742 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.742 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:33.742 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:33.742 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.742 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:34.003 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.003 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:34.003 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:34.003 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.003 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:34.263 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.264 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:34.264 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:34.264 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.264 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:34.855 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.855 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:34.855 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:34.855 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.855 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:35.116 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.116 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:35.116 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:35.116 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.116 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:35.377 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.377 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:35.377 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:35.377 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.377 19:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.638 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:35.638 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:35.638 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.638 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.898 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:35.898 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:35.898 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.898 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:36.468 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.468 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:36.468 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:36.468 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.468 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:36.727 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.727 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:36.727 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:36.727 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.727 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:36.987 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.987 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:36.987 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:36.987 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.987 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:37.247 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.247 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:37.247 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:37.247 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.247 19:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:37.508 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.508 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:37.508 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:37.508 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.508 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:37.769 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2895904 00:20:38.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2895904) - No such process 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2895904 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.030 rmmod nvme_tcp 00:20:38.030 rmmod nvme_fabrics 00:20:38.030 rmmod nvme_keyring 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2895650 ']' 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2895650 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2895650 ']' 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2895650 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.030 19:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2895650 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2895650' 00:20:38.030 killing process with pid 2895650 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2895650 00:20:38.030 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2895650 00:20:38.601 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:38.601 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:38.602 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:38.602 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.602 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:38.602 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.602 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.602 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.144 00:20:41.144 real 0m21.395s 00:20:41.144 user 0m44.078s 00:20:41.144 sys 0m8.337s 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:41.144 ************************************ 00:20:41.144 END TEST nvmf_connect_stress 00:20:41.144 ************************************ 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.144 ************************************ 00:20:41.144 START TEST nvmf_fused_ordering 00:20:41.144 ************************************ 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:41.144 * Looking for test storage... 
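The connect_stress run that just finished boils down to one pattern: start the stress tool against the TCP listener, keep issuing RPC batches for as long as that process stays alive (the repeated kill -0 checks above), then clean up once kill -0 reports it gone. The lines below are a minimal standalone sketch of that pattern, not the captured connect_stress.sh; the direct rpc.py invocation and the choice of nvmf_get_subsystems as the per-iteration RPC are assumptions made for illustration (the test itself goes through the rpc_cmd wrapper and a 20-entry rpc.txt batch).

# Sketch only: drive RPC traffic at the target while a stress process runs.
NQN=nqn.2016-06.io.spdk:cnode1
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN" -t 10 &
PERF_PID=$!

while kill -0 "$PERF_PID" 2>/dev/null; do            # stress tool still alive?
    for i in $(seq 1 20); do                         # mirror the 20-command batch
        ./scripts/rpc.py nvmf_get_subsystems >/dev/null   # assumed cheap RPC call
    done
done
wait "$PERF_PID" 2>/dev/null || true                 # reap it once it exits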
00:20:41.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.144 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.145 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.735 19:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:47.735 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:47.735 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
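What the whitelist above is doing: nvmf/common.sh keeps per-family arrays of PCI device IDs (the e810 0x1592/0x159b entries, the x722 0x37d2 entry, a set of Mellanox IDs) and then walks the PCI bus looking for matches, which is why both 0000:4b:00.0 and 0000:4b:00.1 are reported as 0x8086 - 0x159b. A rough standalone equivalent of that sysfs walk, for illustration only and limited to the two e810 IDs seen here:

# Sketch: find NICs by PCI vendor:device ID and report their net interfaces.
wanted=("0x8086:0x1592" "0x8086:0x159b")             # e810 IDs from the log
for dev in /sys/bus/pci/devices/*; do
    ven=$(cat "$dev/vendor"); did=$(cat "$dev/device")
    for id in "${wanted[@]}"; do
        [[ "$ven:$did" == "$id" ]] || continue
        for netdir in "$dev"/net/*; do               # NICs expose <pci>/net/<ifname>
            [[ -e "$netdir" ]] && echo "Found ${dev##*/} ($ven - $did): ${netdir##*/}"
        done
    done
done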
00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:47.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:47.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.735 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.736 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:20:47.997 00:20:47.997 --- 10.0.0.2 ping statistics --- 00:20:47.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.997 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:20:47.997 00:20:47.997 --- 10.0.0.1 ping statistics --- 00:20:47.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.997 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.997 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2902114 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2902114 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2902114 ']' 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.258 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:48.258 [2024-07-22 19:25:07.083026] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
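The nvmf_tcp_init block above is what makes a single dual-port host look like an initiator/target pair: one E810 port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, the other (cvl_0_1) stays in the default namespace as 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction confirms the path before the target starts. Condensed into standalone commands, with interface names and addresses as captured here (they will differ on other hosts):

# Sketch: isolate the target-side port in its own netns for NVMe/TCP tests.
NS=cvl_0_0_ns_spdk
sudo ip netns add "$NS"
sudo ip link set cvl_0_0 netns "$NS"                 # target port into the namespace
sudo ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec "$NS" ip link set cvl_0_0 up
sudo ip netns exec "$NS" ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                   # initiator -> target
sudo ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator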
00:20:48.258 [2024-07-22 19:25:07.083149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.258 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.520 [2024-07-22 19:25:07.235108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.520 [2024-07-22 19:25:07.432172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.520 [2024-07-22 19:25:07.432237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.520 [2024-07-22 19:25:07.432252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.520 [2024-07-22 19:25:07.432261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.520 [2024-07-22 19:25:07.432273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.520 [2024-07-22 19:25:07.432306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 [2024-07-22 19:25:07.879154] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.092 [2024-07-22 19:25:07.895421] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 NULL1 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.092 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:49.092 [2024-07-22 19:25:07.975540] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
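With the xtrace noise stripped away, the fused_ordering bring-up above is a short RPC sequence against the target running inside the namespace, followed by the initiator-side tool. The condensed form below goes through scripts/rpc.py directly rather than the rpc_cmd wrapper used by the test; the flag values are copied from the log, while the inline comments on what they mean are the editor's reading, not part of the captured output.

# Sketch: target-side RPCs for the fused_ordering test, then the initiator run.
RPC="./scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192              # TCP transport, flags as logged
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                      # 1000 MiB null bdev, 512 B blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1                   # exposed as namespace 1

./test/nvme/fused_ordering/fused_ordering \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"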
00:20:49.092 [2024-07-22 19:25:07.975624] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902388 ] 00:20:49.092 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.665 Attached to nqn.2016-06.io.spdk:cnode1 00:20:49.665 Namespace ID: 1 size: 1GB 00:20:49.665 fused_ordering(0) 00:20:49.665 fused_ordering(1) 00:20:49.665 fused_ordering(2) 00:20:49.665 fused_ordering(3) 00:20:49.665 fused_ordering(4) 00:20:49.665 fused_ordering(5) 00:20:49.665 fused_ordering(6) 00:20:49.665 fused_ordering(7) 00:20:49.665 fused_ordering(8) 00:20:49.665 fused_ordering(9) 00:20:49.665 fused_ordering(10) 00:20:49.665 fused_ordering(11) 00:20:49.665 fused_ordering(12) 00:20:49.665 fused_ordering(13) 00:20:49.665 fused_ordering(14) 00:20:49.665 fused_ordering(15) 00:20:49.665 fused_ordering(16) 00:20:49.665 fused_ordering(17) 00:20:49.665 fused_ordering(18) 00:20:49.665 fused_ordering(19) 00:20:49.665 fused_ordering(20) 00:20:49.665 fused_ordering(21) 00:20:49.665 fused_ordering(22) 00:20:49.665 fused_ordering(23) 00:20:49.665 fused_ordering(24) 00:20:49.665 fused_ordering(25) 00:20:49.665 fused_ordering(26) 00:20:49.665 fused_ordering(27) 00:20:49.665 fused_ordering(28) 00:20:49.665 fused_ordering(29) 00:20:49.665 fused_ordering(30) 00:20:49.665 fused_ordering(31) 00:20:49.665 fused_ordering(32) 00:20:49.665 fused_ordering(33) 00:20:49.665 fused_ordering(34) 00:20:49.665 fused_ordering(35) 00:20:49.665 fused_ordering(36) 00:20:49.665 fused_ordering(37) 00:20:49.665 fused_ordering(38) 00:20:49.665 fused_ordering(39) 00:20:49.665 fused_ordering(40) 00:20:49.665 fused_ordering(41) 00:20:49.665 fused_ordering(42) 00:20:49.665 fused_ordering(43) 00:20:49.665 fused_ordering(44) 00:20:49.665 fused_ordering(45) 00:20:49.665 fused_ordering(46) 00:20:49.665 fused_ordering(47) 00:20:49.665 fused_ordering(48) 00:20:49.665 fused_ordering(49) 00:20:49.665 fused_ordering(50) 00:20:49.665 fused_ordering(51) 00:20:49.665 fused_ordering(52) 00:20:49.665 fused_ordering(53) 00:20:49.665 fused_ordering(54) 00:20:49.665 fused_ordering(55) 00:20:49.665 fused_ordering(56) 00:20:49.665 fused_ordering(57) 00:20:49.665 fused_ordering(58) 00:20:49.665 fused_ordering(59) 00:20:49.665 fused_ordering(60) 00:20:49.665 fused_ordering(61) 00:20:49.665 fused_ordering(62) 00:20:49.665 fused_ordering(63) 00:20:49.665 fused_ordering(64) 00:20:49.665 fused_ordering(65) 00:20:49.665 fused_ordering(66) 00:20:49.665 fused_ordering(67) 00:20:49.665 fused_ordering(68) 00:20:49.665 fused_ordering(69) 00:20:49.665 fused_ordering(70) 00:20:49.665 fused_ordering(71) 00:20:49.665 fused_ordering(72) 00:20:49.665 fused_ordering(73) 00:20:49.665 fused_ordering(74) 00:20:49.665 fused_ordering(75) 00:20:49.665 fused_ordering(76) 00:20:49.665 fused_ordering(77) 00:20:49.665 fused_ordering(78) 00:20:49.665 fused_ordering(79) 00:20:49.665 fused_ordering(80) 00:20:49.665 fused_ordering(81) 00:20:49.665 fused_ordering(82) 00:20:49.665 fused_ordering(83) 00:20:49.665 fused_ordering(84) 00:20:49.665 fused_ordering(85) 00:20:49.665 fused_ordering(86) 00:20:49.665 fused_ordering(87) 00:20:49.665 fused_ordering(88) 00:20:49.665 fused_ordering(89) 00:20:49.665 fused_ordering(90) 00:20:49.665 fused_ordering(91) 00:20:49.665 fused_ordering(92) 00:20:49.665 fused_ordering(93) 00:20:49.665 fused_ordering(94) 00:20:49.665 fused_ordering(95) 00:20:49.665 fused_ordering(96) 
00:20:49.665 fused_ordering(97) [entries fused_ordering(98) through fused_ordering(956), logged between 00:20:49 and 00:20:52, are elided here; the numbering continues without gaps]
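After entry 1023 the tool exits, and the tail of the trace below shows the script dropping its exit trap and calling nvmftestfini: the kernel initiator modules are unloaded, the nvmf target process is killed, and the per-test network namespace is torn down. A rough sketch of those cleanup steps follows; the body of _remove_spdk_ns runs with xtrace disabled, so the netns deletion shown here is an assumption based on the helper's name rather than something visible in the trace.

    # unload the kernel initiator stack (the trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf target started for this test (pid 2902114 in this run)
    kill 2902114
    # assumed: delete the test namespace, then flush the leftover initiator-side address
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1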
00:20:52.018 fused_ordering(957) 00:20:52.018 fused_ordering(958) 00:20:52.018 fused_ordering(959) 00:20:52.018 fused_ordering(960) 00:20:52.018 fused_ordering(961) 00:20:52.018 fused_ordering(962) 00:20:52.018 fused_ordering(963) 00:20:52.018 fused_ordering(964) 00:20:52.018 fused_ordering(965) 00:20:52.018 fused_ordering(966) 00:20:52.018 fused_ordering(967) 00:20:52.018 fused_ordering(968) 00:20:52.018 fused_ordering(969) 00:20:52.018 fused_ordering(970) 00:20:52.018 fused_ordering(971) 00:20:52.018 fused_ordering(972) 00:20:52.018 fused_ordering(973) 00:20:52.018 fused_ordering(974) 00:20:52.018 fused_ordering(975) 00:20:52.018 fused_ordering(976) 00:20:52.018 fused_ordering(977) 00:20:52.018 fused_ordering(978) 00:20:52.018 fused_ordering(979) 00:20:52.018 fused_ordering(980) 00:20:52.018 fused_ordering(981) 00:20:52.018 fused_ordering(982) 00:20:52.018 fused_ordering(983) 00:20:52.018 fused_ordering(984) 00:20:52.018 fused_ordering(985) 00:20:52.018 fused_ordering(986) 00:20:52.018 fused_ordering(987) 00:20:52.018 fused_ordering(988) 00:20:52.018 fused_ordering(989) 00:20:52.018 fused_ordering(990) 00:20:52.018 fused_ordering(991) 00:20:52.018 fused_ordering(992) 00:20:52.018 fused_ordering(993) 00:20:52.018 fused_ordering(994) 00:20:52.018 fused_ordering(995) 00:20:52.018 fused_ordering(996) 00:20:52.018 fused_ordering(997) 00:20:52.018 fused_ordering(998) 00:20:52.018 fused_ordering(999) 00:20:52.018 fused_ordering(1000) 00:20:52.018 fused_ordering(1001) 00:20:52.018 fused_ordering(1002) 00:20:52.018 fused_ordering(1003) 00:20:52.018 fused_ordering(1004) 00:20:52.018 fused_ordering(1005) 00:20:52.018 fused_ordering(1006) 00:20:52.018 fused_ordering(1007) 00:20:52.018 fused_ordering(1008) 00:20:52.018 fused_ordering(1009) 00:20:52.018 fused_ordering(1010) 00:20:52.018 fused_ordering(1011) 00:20:52.018 fused_ordering(1012) 00:20:52.018 fused_ordering(1013) 00:20:52.018 fused_ordering(1014) 00:20:52.018 fused_ordering(1015) 00:20:52.018 fused_ordering(1016) 00:20:52.018 fused_ordering(1017) 00:20:52.018 fused_ordering(1018) 00:20:52.018 fused_ordering(1019) 00:20:52.018 fused_ordering(1020) 00:20:52.018 fused_ordering(1021) 00:20:52.018 fused_ordering(1022) 00:20:52.018 fused_ordering(1023) 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.018 rmmod nvme_tcp 00:20:52.018 rmmod nvme_fabrics 00:20:52.018 rmmod nvme_keyring 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2902114 ']' 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2902114 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2902114 ']' 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2902114 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2902114 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2902114' 00:20:52.018 killing process with pid 2902114 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2902114 00:20:52.018 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2902114 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.959 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.558 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.558 00:20:55.558 real 0m14.251s 00:20:55.558 user 0m8.359s 00:20:55.558 sys 0m7.115s 00:20:55.558 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:55.558 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:55.558 ************************************ 00:20:55.558 END TEST nvmf_fused_ordering 00:20:55.558 ************************************ 00:20:55.558 19:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:55.558 19:25:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:55.558 19:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:55.558 19:25:13 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.558 19:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:55.558 ************************************ 00:20:55.558 START TEST nvmf_ns_masking 00:20:55.558 ************************************ 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:55.558 * Looking for test storage... 00:20:55.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same toolchain prefixes repeated several times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[same value with /opt/go/1.21.1/bin prepended again; full expansion elided] 00:20:55.558 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc/21.7/bin prepended again; full expansion elided] 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [exported PATH value; elided] 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.559 19:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c92d02f3-d8d3-4a59-9d7d-a4660caf025d 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4e48eb38-c666-49cb-8d58-1f9df1b58bfa 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8e0b73dd-9ccd-4072-8557-833e9c3528ef 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.559 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:02.152 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:02.152 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:02.152 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:02.152 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.152 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.152 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.152 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.152 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.152 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.414 19:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:21:02.414 00:21:02.414 --- 10.0.0.2 ping statistics --- 00:21:02.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.414 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:21:02.414 00:21:02.414 --- 10.0.0.1 ping statistics --- 00:21:02.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.414 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2907093 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2907093 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2907093 ']' 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.414 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:02.414 [2024-07-22 19:25:21.316603] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:02.414 [2024-07-22 19:25:21.316728] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.676 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.676 [2024-07-22 19:25:21.448896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.936 [2024-07-22 19:25:21.635236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.936 [2024-07-22 19:25:21.635281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.936 [2024-07-22 19:25:21.635294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.936 [2024-07-22 19:25:21.635303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.936 [2024-07-22 19:25:21.635315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.936 [2024-07-22 19:25:21.635341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.197 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.197 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:21:03.197 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.197 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.197 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:03.197 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.197 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:03.457 [2024-07-22 19:25:22.228808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.457 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:21:03.457 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:21:03.457 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:03.718 Malloc1 00:21:03.718 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:03.718 Malloc2 00:21:03.979 19:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:03.979 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:04.240 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.240 [2024-07-22 19:25:23.128903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.240 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:21:04.240 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8e0b73dd-9ccd-4072-8557-833e9c3528ef -a 10.0.0.2 -s 4420 -i 4 00:21:04.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:21:04.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:04.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:06.432 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:06.693 [ 0]:0x1 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb6c23f4d4d4444b00380337f14e47a 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb6c23f4d4d4444b00380337f14e47a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:06.693 [ 0]:0x1 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:06.693 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb6c23f4d4d4444b00380337f14e47a 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb6c23f4d4d4444b00380337f14e47a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:06.955 [ 1]:0x2 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7e2fa7f45164ac5a2db94797b511316 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7e2fa7f45164ac5a2db94797b511316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:06.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:06.955 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:07.215 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:07.215 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:21:07.215 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8e0b73dd-9ccd-4072-8557-833e9c3528ef -a 10.0.0.2 -s 4420 -i 4 00:21:07.477 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:07.477 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:07.477 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:07.477 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:21:07.477 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:21:07.477 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:10.023 [ 0]:0x2 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7e2fa7f45164ac5a2db94797b511316 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7e2fa7f45164ac5a2db94797b511316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.023 [ 0]:0x1 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb6c23f4d4d4444b00380337f14e47a 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb6c23f4d4d4444b00380337f14e47a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:10.023 [ 1]:0x2 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7e2fa7f45164ac5a2db94797b511316 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7e2fa7f45164ac5a2db94797b511316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.023 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.284 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:10.284 [ 0]:0x2 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7e2fa7f45164ac5a2db94797b511316 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7e2fa7f45164ac5a2db94797b511316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:10.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:10.284 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8e0b73dd-9ccd-4072-8557-833e9c3528ef -a 10.0.0.2 -s 4420 -i 4 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:21:10.544 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:13.088 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:13.088 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:13.088 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:13.089 [ 0]:0x1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb6c23f4d4d4444b00380337f14e47a 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb6c23f4d4d4444b00380337f14e47a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:13.089 [ 1]:0x2 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7e2fa7f45164ac5a2db94797b511316 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7e2fa7f45164ac5a2db94797b511316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:13.089 19:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:13.089 [ 0]:0x2 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:13.089 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7e2fa7f45164ac5a2db94797b511316 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7e2fa7f45164ac5a2db94797b511316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:13.089 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:13.350 [2024-07-22 19:25:32.154288] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:13.350 request: 00:21:13.350 { 00:21:13.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.350 "nsid": 2, 00:21:13.350 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.350 "method": "nvmf_ns_remove_host", 00:21:13.350 "req_id": 1 00:21:13.350 } 00:21:13.350 Got JSON-RPC error response 00:21:13.350 response: 00:21:13.350 { 00:21:13.350 "code": -32602, 00:21:13.350 "message": "Invalid parameters" 00:21:13.350 } 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:13.350 [ 0]:0x2 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:13.350 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7e2fa7f45164ac5a2db94797b511316 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7e2fa7f45164ac5a2db94797b511316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:13.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2909553 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2909553 /var/tmp/host.sock 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2909553 ']' 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:13.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.610 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:13.870 [2024-07-22 19:25:32.581741] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
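The trace up to this point exercises namespace masking end to end: a masked and an auto-visible namespace on the target, a kernel-initiator connect, and visibility toggled per host NQN. As a reading aid, the sequence condenses to the sketch below, assembled only from commands that appear in this run (rpc.py stands for spdk/scripts/rpc.py run against the nvmf_tgt started earlier; the NQNs, bdev names and the 10.0.0.2:4420 listener are copied from the log, while the host-ID and queue-count options passed to nvme connect in the run are omitted). It is a condensation, not the ns_masking.sh script itself.

  # Target side: one masked and one auto-visible namespace under the same subsystem
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Kernel-initiator side: connect as host1 and check which NSIDs the controller exposes
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
  nvme list-ns /dev/nvme0                               # only 0x2 is listed while NSID 1 is masked
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID while NSID 1 is masked
  # Toggle visibility of NSID 1 for this host, re-running the two checks above after each step
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

As the trace also shows, calling nvmf_ns_remove_host on NSID 2, which was left auto-visible, is rejected with a JSON-RPC 'Invalid parameters' error rather than silently succeeding.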
00:21:13.870 [2024-07-22 19:25:32.581853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909553 ] 00:21:13.870 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.870 [2024-07-22 19:25:32.708958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.130 [2024-07-22 19:25:32.885429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.698 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.698 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:21:14.698 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:14.698 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:14.960 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c92d02f3-d8d3-4a59-9d7d-a4660caf025d 00:21:14.960 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:14.960 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C92D02F3D8D34A599D7DA4660CAF025D -i 00:21:15.221 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4e48eb38-c666-49cb-8d58-1f9df1b58bfa 00:21:15.221 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:15.221 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4E48EB38C66649CB8D581F9DF1B58BFA -i 00:21:15.221 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:15.481 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:15.481 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:15.481 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:15.742 nvme0n1 00:21:15.742 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:15.742 19:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:16.003 nvme1n2 00:21:16.003 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:16.003 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:21:16.003 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:16.003 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:16.003 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c92d02f3-d8d3-4a59-9d7d-a4660caf025d == \c\9\2\d\0\2\f\3\-\d\8\d\3\-\4\a\5\9\-\9\d\7\d\-\a\4\6\6\0\c\a\f\0\2\5\d ]] 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:16.264 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4e48eb38-c666-49cb-8d58-1f9df1b58bfa == \4\e\4\8\e\b\3\8\-\c\6\6\6\-\4\9\c\b\-\8\d\5\8\-\1\f\9\d\f\1\b\5\8\b\f\a ]] 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2909553 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2909553 ']' 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2909553 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2909553 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 2909553' 00:21:16.525 killing process with pid 2909553 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2909553 00:21:16.525 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2909553 00:21:17.950 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.211 rmmod nvme_tcp 00:21:18.211 rmmod nvme_fabrics 00:21:18.211 rmmod nvme_keyring 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2907093 ']' 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2907093 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2907093 ']' 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2907093 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2907093 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2907093' 00:21:18.211 killing process with pid 2907093 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2907093 00:21:18.211 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2907093 00:21:19.610 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.610 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.610 
19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.610 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.610 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.610 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.610 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.610 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.525 00:21:21.525 real 0m26.299s 00:21:21.525 user 0m27.285s 00:21:21.525 sys 0m7.483s 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:21.525 ************************************ 00:21:21.525 END TEST nvmf_ns_masking 00:21:21.525 ************************************ 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:21.525 ************************************ 00:21:21.525 START TEST nvmf_nvme_cli 00:21:21.525 ************************************ 00:21:21.525 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:21.787 * Looking for test storage... 
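The stretch of the trace that closes with END TEST nvmf_ns_masking above repeats the visibility check from a second SPDK application rather than the kernel initiator: each host NQN attaches its own controller and should end up with exactly one bdev whose UUID matches the NGUID assigned to its namespace. The following is a hedged condensation of those host-side calls, again using only RPCs visible in the run (rpc.py stands for spdk/scripts/rpc.py; the extra app was started with -r /var/tmp/host.sock, and the NGUIDs were set on the target with nvmf_subsystem_add_ns -g).

  # One bdev_nvme controller per host NQN, attached through the host app's RPC socket
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # yields nvme0n1 (NSID 1 only)
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # yields nvme1n2 (NSID 2 only)
  # Verify each host sees only its own namespace and that the bdev UUIDs match the assigned NGUIDs
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'            # expected: nvme0n1 nvme1n2
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid' # c92d02f3-d8d3-4a59-9d7d-a4660caf025d
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid' # 4e48eb38-c666-49cb-8d58-1f9df1b58bfa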
00:21:21.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.787 19:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.787 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.936 19:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:29.936 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:29.936 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.936 19:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:29.936 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:29.936 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.936 19:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.936 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:21:29.937 00:21:29.937 --- 10.0.0.2 ping statistics --- 00:21:29.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.937 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
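The nvmf_tcp_init steps traced above carve the two E810 ports into a point-to-point test topology: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and an iptables rule opens the default NVMe/TCP port. A condensed sketch of that same sequence, using the interface and namespace names reported in this run:

    # create the target namespace and move the target-side port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side (root namespace) and the target side (namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic on port 4420 and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1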
00:21:29.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:21:29.937 00:21:29.937 --- 10.0.0.1 ping statistics --- 00:21:29.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.937 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2914703 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2914703 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2914703 ']' 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.937 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 [2024-07-22 19:25:47.888941] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
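nvmfappstart prepends the namespace wrapper (NVMF_TARGET_NS_CMD) to the target command line, so the SPDK NVMe-oF target runs entirely inside cvl_0_0_ns_spdk while its RPC socket remains reachable from the test script. A minimal sketch of the launch the trace performs; the backgrounding and pid capture are assumed here (the trace only shows the resulting nvmfpid), and the binary path is shortened:

    # start the NVMe-oF target inside the target namespace
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # the harness then waits for the app to open its RPC socket before issuing rpc calls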
00:21:29.937 [2024-07-22 19:25:47.889070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.937 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.937 [2024-07-22 19:25:48.024030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.937 [2024-07-22 19:25:48.208597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.937 [2024-07-22 19:25:48.208646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.937 [2024-07-22 19:25:48.208660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.937 [2024-07-22 19:25:48.208669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.937 [2024-07-22 19:25:48.208680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.937 [2024-07-22 19:25:48.208853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.937 [2024-07-22 19:25:48.208961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.937 [2024-07-22 19:25:48.209105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.937 [2024-07-22 19:25:48.209132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 [2024-07-22 19:25:48.685832] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 Malloc0 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:29.937 19:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 Malloc1 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 [2024-07-22 19:25:48.850530] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.937 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:21:30.198 00:21:30.198 Discovery Log Number of Records 2, Generation counter 2 00:21:30.198 =====Discovery Log Entry 0====== 00:21:30.198 trtype: tcp 00:21:30.198 adrfam: ipv4 00:21:30.198 subtype: current discovery subsystem 00:21:30.198 treq: not required 
00:21:30.198 portid: 0 00:21:30.198 trsvcid: 4420 00:21:30.198 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:30.198 traddr: 10.0.0.2 00:21:30.198 eflags: explicit discovery connections, duplicate discovery information 00:21:30.198 sectype: none 00:21:30.198 =====Discovery Log Entry 1====== 00:21:30.198 trtype: tcp 00:21:30.198 adrfam: ipv4 00:21:30.198 subtype: nvme subsystem 00:21:30.198 treq: not required 00:21:30.198 portid: 0 00:21:30.198 trsvcid: 4420 00:21:30.198 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:30.198 traddr: 10.0.0.2 00:21:30.198 eflags: none 00:21:30.198 sectype: none 00:21:30.198 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:30.198 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:30.198 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:30.198 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:30.198 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:30.198 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:30.198 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:30.198 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:30.198 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:30.198 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:21:30.198 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:32.113 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:32.113 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:21:32.113 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:32.113 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:21:32.113 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:21:32.113 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:21:34.027 /dev/nvme0n1 ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:34.027 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.027 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.027 rmmod nvme_tcp 00:21:34.027 rmmod nvme_fabrics 00:21:34.027 rmmod nvme_keyring 00:21:34.289 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.289 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2914703 ']' 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2914703 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2914703 ']' 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2914703 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2914703 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2914703' 00:21:34.289 killing process with pid 2914703 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2914703 00:21:34.289 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2914703 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.243 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.789 00:21:37.789 real 0m15.790s 00:21:37.789 user 0m24.790s 00:21:37.789 sys 0m6.119s 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:37.789 ************************************ 00:21:37.789 END TEST nvmf_nvme_cli 00:21:37.789 ************************************ 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:37.789 19:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.790 ************************************ 00:21:37.790 START TEST nvmf_auth_target 00:21:37.790 ************************************ 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:37.790 * Looking for test storage... 
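The nvmf_nvme_cli test that just completed above exercised the full provisioning and host-attach path. Condensed from the trace, with rpc_cmd written out as the rpc.py client it wraps and the --hostnqn/--hostid flags from the trace omitted for brevity:

    # target side: TCP transport, two malloc bdevs, one subsystem with both namespaces, a listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side: discover, connect, confirm both namespaces appeared, then tear down
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1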
00:21:37.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.790 
19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.790 19:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.379 
19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:44.379 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:44.379 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.379 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:44.380 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.380 19:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:44.380 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.380 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.640 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.640 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.640 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.640 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.640 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.640 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.640 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
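The auth target test repeats the same device scan and namespace setup as the nvme_cli test: gather_supported_nvmf_pci_devs matches each PCI function against an allow-list of Intel (E810/X722) and Mellanox device IDs, then resolves the function to its kernel netdev through sysfs. For illustration only, with the PCI address reported in this run:

    # map a PCI function to its kernel net device the same way the trace does
    pci=0000:4b:00.0
    ls /sys/bus/pci/devices/$pci/net/     # -> cvl_0_0 on this node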
00:21:44.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:21:44.640 00:21:44.640 --- 10.0.0.2 ping statistics --- 00:21:44.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.640 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:21:44.641 00:21:44.641 --- 10.0.0.1 ping statistics --- 00:21:44.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.641 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2919985 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2919985 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2919985 ']' 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.641 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2920328 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a6d87260c164ca4f6a13cc78d4c7f939cfdc6254e965507a 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.j7K 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a6d87260c164ca4f6a13cc78d4c7f939cfdc6254e965507a 0 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a6d87260c164ca4f6a13cc78d4c7f939cfdc6254e965507a 0 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a6d87260c164ca4f6a13cc78d4c7f939cfdc6254e965507a 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 
00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.j7K 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.j7K 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.j7K 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e4fcc0ef71c87fa2b63394625430029a70e2996b1d092575595a5256b7ea00d0 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lMr 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e4fcc0ef71c87fa2b63394625430029a70e2996b1d092575595a5256b7ea00d0 3 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e4fcc0ef71c87fa2b63394625430029a70e2996b1d092575595a5256b7ea00d0 3 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e4fcc0ef71c87fa2b63394625430029a70e2996b1d092575595a5256b7ea00d0 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:45.582 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lMr 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lMr 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.lMr 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 
00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9aaa1dd125a5104564233dd5aae9d0f7 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.K8P 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9aaa1dd125a5104564233dd5aae9d0f7 1 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9aaa1dd125a5104564233dd5aae9d0f7 1 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9aaa1dd125a5104564233dd5aae9d0f7 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.K8P 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.K8P 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.K8P 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a2c2cc3d2a9a0eb3c5099d9ad292d80ea9e3d2f60b8301ca 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4iB 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a2c2cc3d2a9a0eb3c5099d9ad292d80ea9e3d2f60b8301ca 2 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a2c2cc3d2a9a0eb3c5099d9ad292d80ea9e3d2f60b8301ca 2 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:45.843 19:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a2c2cc3d2a9a0eb3c5099d9ad292d80ea9e3d2f60b8301ca 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4iB 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4iB 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.4iB 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:45.843 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8c3afb0f1e86f5346bbe4cecf85497f8aca312960a7a712 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1TY 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8c3afb0f1e86f5346bbe4cecf85497f8aca312960a7a712 2 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8c3afb0f1e86f5346bbe4cecf85497f8aca312960a7a712 2 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8c3afb0f1e86f5346bbe4cecf85497f8aca312960a7a712 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1TY 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1TY 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.1TY 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 
00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f6ead167c12a09e70000b898e79de701 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SuY 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f6ead167c12a09e70000b898e79de701 1 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f6ead167c12a09e70000b898e79de701 1 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f6ead167c12a09e70000b898e79de701 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:45.844 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SuY 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SuY 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.SuY 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e73e52c83493b02599d9a1a073916063ddb9b896b4be1ce6f77a41dd9294a0bf 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.322 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key e73e52c83493b02599d9a1a073916063ddb9b896b4be1ce6f77a41dd9294a0bf 3 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e73e52c83493b02599d9a1a073916063ddb9b896b4be1ce6f77a41dd9294a0bf 3 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e73e52c83493b02599d9a1a073916063ddb9b896b4be1ce6f77a41dd9294a0bf 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.322 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.322 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.322 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2919985 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2919985 ']' 00:21:46.104 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.105 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.105 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.105 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.105 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2920328 /var/tmp/host.sock 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2920328 ']' 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:46.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
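
The gen_dhchap_key traces above draw random hex from /dev/urandom with xxd and hand it to an inline Python helper (only "python -" is visible in the trace) that emits the DHHC-1 secrets later seen on the nvme connect lines. A minimal sketch of that formatting step, assuming the usual DHHC-1 representation (base64 of the ASCII key followed by its CRC32, least-significant byte first, with the two-digit field naming the secret transform: 00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, used verbatim as the secret bytes
digest=0                               # 0 pairs with the 'null 48' key generated first above
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
# format assumed from the DHHC-1 strings visible in this trace: base64(key || CRC32-LE)
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF

The resulting string (e.g. the DHHC-1:00:YTZk... secret of this run) is written to a mktemp file such as /tmp/spdk.key-null.j7K, chmod 0600, and recorded in the keys[]/ckeys[] arrays for the keyring RPCs that follow.
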
00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.105 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.j7K 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.j7K 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.j7K 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.lMr ]] 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lMr 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lMr 00:21:46.716 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lMr 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.K8P 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.990 19:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.K8P 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.K8P 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.4iB ]] 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4iB 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4iB 00:21:46.990 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4iB 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1TY 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.1TY 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.1TY 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.SuY ]] 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SuY 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.250 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SuY 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SuY 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.322 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.322 00:21:47.510 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.322 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.769 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.770 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.770 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.770 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.770 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.770 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.770 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.029 00:21:48.029 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.029 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.029 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.289 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.289 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.289 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.289 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.289 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.289 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.289 { 00:21:48.289 "cntlid": 1, 00:21:48.289 "qid": 0, 00:21:48.289 "state": "enabled", 00:21:48.290 "thread": "nvmf_tgt_poll_group_000", 00:21:48.290 "listen_address": { 00:21:48.290 "trtype": "TCP", 00:21:48.290 "adrfam": "IPv4", 00:21:48.290 "traddr": "10.0.0.2", 00:21:48.290 "trsvcid": "4420" 00:21:48.290 }, 00:21:48.290 "peer_address": { 00:21:48.290 "trtype": "TCP", 00:21:48.290 "adrfam": "IPv4", 00:21:48.290 "traddr": "10.0.0.1", 00:21:48.290 "trsvcid": "35850" 00:21:48.290 }, 00:21:48.290 "auth": { 00:21:48.290 "state": "completed", 00:21:48.290 "digest": "sha256", 00:21:48.290 "dhgroup": "null" 00:21:48.290 } 00:21:48.290 } 00:21:48.290 ]' 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.290 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.550 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.491 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.752 00:21:49.752 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:21:49.752 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.752 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.013 { 00:21:50.013 "cntlid": 3, 00:21:50.013 "qid": 0, 00:21:50.013 "state": "enabled", 00:21:50.013 "thread": "nvmf_tgt_poll_group_000", 00:21:50.013 "listen_address": { 00:21:50.013 "trtype": "TCP", 00:21:50.013 "adrfam": "IPv4", 00:21:50.013 "traddr": "10.0.0.2", 00:21:50.013 "trsvcid": "4420" 00:21:50.013 }, 00:21:50.013 "peer_address": { 00:21:50.013 "trtype": "TCP", 00:21:50.013 "adrfam": "IPv4", 00:21:50.013 "traddr": "10.0.0.1", 00:21:50.013 "trsvcid": "35874" 00:21:50.013 }, 00:21:50.013 "auth": { 00:21:50.013 "state": "completed", 00:21:50.013 "digest": "sha256", 00:21:50.013 "dhgroup": "null" 00:21:50.013 } 00:21:50.013 } 00:21:50.013 ]' 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.013 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.273 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:21:50.844 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.105 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.365 00:21:51.366 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.366 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.366 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.626 19:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.626 { 00:21:51.626 "cntlid": 5, 00:21:51.626 "qid": 0, 00:21:51.626 "state": "enabled", 00:21:51.626 "thread": "nvmf_tgt_poll_group_000", 00:21:51.626 "listen_address": { 00:21:51.626 "trtype": "TCP", 00:21:51.626 "adrfam": "IPv4", 00:21:51.626 "traddr": "10.0.0.2", 00:21:51.626 "trsvcid": "4420" 00:21:51.626 }, 00:21:51.626 "peer_address": { 00:21:51.626 "trtype": "TCP", 00:21:51.626 "adrfam": "IPv4", 00:21:51.626 "traddr": "10.0.0.1", 00:21:51.626 "trsvcid": "35914" 00:21:51.626 }, 00:21:51.626 "auth": { 00:21:51.626 "state": "completed", 00:21:51.626 "digest": "sha256", 00:21:51.626 "dhgroup": "null" 00:21:51.626 } 00:21:51.626 } 00:21:51.626 ]' 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.626 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.887 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:21:52.459 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.719 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.719 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.719 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
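
Each connect_authenticate pass above repeats the same RPC choreography for one key index; condensed into a single sketch using the key0/ckey0 files of this run (run from the SPDK tree; the subsystem nqn.2024-03.io.spdk:cnode0 and its TCP listener were created earlier in the script, and /var/tmp/host.sock is the host application's RPC socket):

# target side: register the key material and require DHCHAP for this host NQN
scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.j7K
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lMr
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: same keys, select the digest/dhgroup under test, then attach
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.j7K
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lMr
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

After the checks, the pass is torn down with bdev_nvme_detach_controller nvme0, a kernel-initiator nvme connect/disconnect, and nvmf_subsystem_remove_host, before the next key or dhgroup is exercised.
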
00:21:52.719 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.719 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.719 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.720 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.980 00:21:52.980 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.980 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.980 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.241 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.241 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.241 { 00:21:53.241 "cntlid": 7, 00:21:53.241 "qid": 0, 00:21:53.241 "state": "enabled", 00:21:53.241 "thread": "nvmf_tgt_poll_group_000", 00:21:53.241 "listen_address": { 00:21:53.241 "trtype": "TCP", 00:21:53.241 "adrfam": "IPv4", 00:21:53.241 "traddr": "10.0.0.2", 00:21:53.241 "trsvcid": "4420" 00:21:53.241 }, 00:21:53.241 "peer_address": { 00:21:53.241 "trtype": "TCP", 00:21:53.241 "adrfam": "IPv4", 00:21:53.241 "traddr": "10.0.0.1", 00:21:53.241 "trsvcid": "35928" 00:21:53.241 }, 00:21:53.241 "auth": { 00:21:53.241 "state": "completed", 00:21:53.241 "digest": "sha256", 00:21:53.241 "dhgroup": "null" 00:21:53.241 } 00:21:53.241 } 00:21:53.241 ]' 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.241 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.502 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
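
Between attach and detach, each pass verifies that the secure channel negotiated what was requested; the hostrpc/rpc_cmd and jq pairs in the trace boil down to the following checks (the comments show what this sha256/null pass printed):

scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
jq -r '.[0].auth.state'   qpairs.json   # completed
jq -r '.[0].auth.digest'  qpairs.json   # sha256
jq -r '.[0].auth.dhgroup' qpairs.json   # null (ffdhe2048 once the dhgroup loop advances, as below)

The same secrets also drive the kernel initiator: nvme connect is invoked with --dhchap-secret and --dhchap-ctrl-secret set to the DHHC-1 strings stored in the key files, roughly --dhchap-secret "$(cat /tmp/spdk.key-null.j7K)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.lMr)" (a plausible reading of how auth.sh expands them; only the expanded strings appear in the trace).
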
00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.457 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.722 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.722 { 00:21:54.722 "cntlid": 9, 00:21:54.722 "qid": 0, 00:21:54.722 "state": 
"enabled", 00:21:54.722 "thread": "nvmf_tgt_poll_group_000", 00:21:54.722 "listen_address": { 00:21:54.722 "trtype": "TCP", 00:21:54.722 "adrfam": "IPv4", 00:21:54.722 "traddr": "10.0.0.2", 00:21:54.722 "trsvcid": "4420" 00:21:54.722 }, 00:21:54.722 "peer_address": { 00:21:54.722 "trtype": "TCP", 00:21:54.722 "adrfam": "IPv4", 00:21:54.722 "traddr": "10.0.0.1", 00:21:54.722 "trsvcid": "35956" 00:21:54.722 }, 00:21:54.722 "auth": { 00:21:54.722 "state": "completed", 00:21:54.722 "digest": "sha256", 00:21:54.722 "dhgroup": "ffdhe2048" 00:21:54.722 } 00:21:54.722 } 00:21:54.722 ]' 00:21:54.722 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.983 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.983 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.983 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:54.983 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.983 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.983 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.983 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.244 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:55.816 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:56.078 19:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.078 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.338 00:21:56.338 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.338 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.338 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.338 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.338 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.338 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.338 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.599 { 00:21:56.599 "cntlid": 11, 00:21:56.599 "qid": 0, 00:21:56.599 "state": "enabled", 00:21:56.599 "thread": "nvmf_tgt_poll_group_000", 00:21:56.599 "listen_address": { 00:21:56.599 "trtype": "TCP", 00:21:56.599 "adrfam": "IPv4", 00:21:56.599 "traddr": "10.0.0.2", 00:21:56.599 "trsvcid": "4420" 00:21:56.599 }, 00:21:56.599 "peer_address": { 
00:21:56.599 "trtype": "TCP", 00:21:56.599 "adrfam": "IPv4", 00:21:56.599 "traddr": "10.0.0.1", 00:21:56.599 "trsvcid": "58884" 00:21:56.599 }, 00:21:56.599 "auth": { 00:21:56.599 "state": "completed", 00:21:56.599 "digest": "sha256", 00:21:56.599 "dhgroup": "ffdhe2048" 00:21:56.599 } 00:21:56.599 } 00:21:56.599 ]' 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.599 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.859 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.431 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.692 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.953 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.953 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.953 { 00:21:57.953 "cntlid": 13, 00:21:57.953 "qid": 0, 00:21:57.953 "state": "enabled", 00:21:57.953 "thread": "nvmf_tgt_poll_group_000", 00:21:57.953 "listen_address": { 00:21:57.953 "trtype": "TCP", 00:21:57.953 "adrfam": "IPv4", 00:21:57.953 "traddr": "10.0.0.2", 00:21:57.953 "trsvcid": "4420" 00:21:57.953 }, 00:21:57.953 "peer_address": { 00:21:57.953 "trtype": "TCP", 00:21:57.953 "adrfam": "IPv4", 00:21:57.953 "traddr": "10.0.0.1", 00:21:57.953 "trsvcid": "58904" 00:21:57.953 }, 00:21:57.953 "auth": { 00:21:57.953 "state": "completed", 00:21:57.953 "digest": "sha256", 00:21:57.953 "dhgroup": "ffdhe2048" 00:21:57.953 } 00:21:57.953 } 00:21:57.953 ]' 00:21:57.953 19:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.214 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:58.214 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.214 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:58.214 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.214 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.214 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.214 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.475 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:59.046 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:59.307 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.308 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.308 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.569 { 00:21:59.569 "cntlid": 15, 00:21:59.569 "qid": 0, 00:21:59.569 "state": "enabled", 00:21:59.569 "thread": "nvmf_tgt_poll_group_000", 00:21:59.569 "listen_address": { 00:21:59.569 "trtype": "TCP", 00:21:59.569 "adrfam": "IPv4", 00:21:59.569 "traddr": "10.0.0.2", 00:21:59.569 "trsvcid": "4420" 00:21:59.569 }, 00:21:59.569 "peer_address": { 00:21:59.569 "trtype": "TCP", 00:21:59.569 "adrfam": "IPv4", 00:21:59.569 "traddr": "10.0.0.1", 00:21:59.569 "trsvcid": "58926" 00:21:59.569 }, 00:21:59.569 "auth": { 00:21:59.569 "state": "completed", 00:21:59.569 "digest": "sha256", 00:21:59.569 "dhgroup": "ffdhe2048" 00:21:59.569 } 00:21:59.569 } 00:21:59.569 ]' 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.569 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.828 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:59.828 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.828 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.828 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.828 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.828 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.771 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.032 00:22:01.032 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.032 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.032 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.294 { 00:22:01.294 "cntlid": 17, 00:22:01.294 "qid": 0, 00:22:01.294 "state": "enabled", 00:22:01.294 "thread": "nvmf_tgt_poll_group_000", 00:22:01.294 "listen_address": { 00:22:01.294 "trtype": "TCP", 00:22:01.294 "adrfam": "IPv4", 00:22:01.294 "traddr": "10.0.0.2", 00:22:01.294 "trsvcid": "4420" 00:22:01.294 }, 00:22:01.294 "peer_address": { 00:22:01.294 "trtype": "TCP", 00:22:01.294 "adrfam": "IPv4", 00:22:01.294 "traddr": "10.0.0.1", 00:22:01.294 "trsvcid": "58950" 00:22:01.294 }, 00:22:01.294 "auth": { 00:22:01.294 "state": "completed", 00:22:01.294 "digest": "sha256", 00:22:01.294 "dhgroup": "ffdhe3072" 00:22:01.294 } 00:22:01.294 } 00:22:01.294 ]' 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.294 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.555 19:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:02.497 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.498 19:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.498 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.759 00:22:02.759 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.759 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.759 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.020 { 00:22:03.020 "cntlid": 19, 00:22:03.020 "qid": 0, 00:22:03.020 "state": "enabled", 00:22:03.020 "thread": "nvmf_tgt_poll_group_000", 00:22:03.020 "listen_address": { 00:22:03.020 "trtype": "TCP", 00:22:03.020 "adrfam": "IPv4", 00:22:03.020 "traddr": "10.0.0.2", 00:22:03.020 "trsvcid": "4420" 00:22:03.020 }, 00:22:03.020 "peer_address": { 00:22:03.020 "trtype": "TCP", 00:22:03.020 "adrfam": "IPv4", 00:22:03.020 "traddr": "10.0.0.1", 00:22:03.020 "trsvcid": "58980" 00:22:03.020 }, 00:22:03.020 "auth": { 00:22:03.020 "state": "completed", 00:22:03.020 "digest": "sha256", 00:22:03.020 "dhgroup": "ffdhe3072" 00:22:03.020 } 00:22:03.020 } 00:22:03.020 ]' 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.020 19:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.282 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:03.931 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
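Besides the SPDK host application, every iteration also drives the kernel NVMe/TCP initiator with the same credentials and then tears everything down before the next digest/dhgroup/key combination. The sketch below mirrors the nvme-cli calls and cleanup visible in the trace; $host_secret and $ctrl_secret are placeholders for the DHHC-1 secrets the test generated earlier (format DHHC-1:<hash id>:<base64 key>:), not variables from the original script.

   # Kernel initiator connect using the same DH-HMAC-CHAP material; the controller
   # secret is only passed when bidirectional authentication is being exercised.
   # $host_secret / $ctrl_secret are placeholders for the DHHC-1 strings in the log.
   nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
       --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
       --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

   # Tear down so the next combination starts from a clean state
   nvme disconnect -n nqn.2024-03.io.spdk:cnode0
   rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
       nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be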
00:22:04.193 19:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.454 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.454 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.715 { 00:22:04.715 "cntlid": 21, 00:22:04.715 "qid": 0, 00:22:04.715 "state": "enabled", 00:22:04.715 "thread": "nvmf_tgt_poll_group_000", 00:22:04.715 "listen_address": { 00:22:04.715 "trtype": "TCP", 00:22:04.715 "adrfam": "IPv4", 00:22:04.715 "traddr": "10.0.0.2", 00:22:04.715 "trsvcid": "4420" 00:22:04.715 }, 00:22:04.715 "peer_address": { 00:22:04.715 "trtype": "TCP", 00:22:04.715 "adrfam": "IPv4", 00:22:04.715 "traddr": "10.0.0.1", 00:22:04.715 "trsvcid": "59016" 00:22:04.715 }, 00:22:04.715 "auth": { 00:22:04.715 "state": "completed", 00:22:04.715 "digest": "sha256", 00:22:04.715 "dhgroup": "ffdhe3072" 00:22:04.715 } 00:22:04.715 } 00:22:04.715 ]' 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.715 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.976 19:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:05.548 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.809 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.069 00:22:06.069 19:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.069 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.070 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.070 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.070 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.070 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.070 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.070 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.070 19:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.070 { 00:22:06.070 "cntlid": 23, 00:22:06.070 "qid": 0, 00:22:06.070 "state": "enabled", 00:22:06.070 "thread": "nvmf_tgt_poll_group_000", 00:22:06.070 "listen_address": { 00:22:06.070 "trtype": "TCP", 00:22:06.070 "adrfam": "IPv4", 00:22:06.070 "traddr": "10.0.0.2", 00:22:06.070 "trsvcid": "4420" 00:22:06.070 }, 00:22:06.070 "peer_address": { 00:22:06.070 "trtype": "TCP", 00:22:06.070 "adrfam": "IPv4", 00:22:06.070 "traddr": "10.0.0.1", 00:22:06.070 "trsvcid": "54380" 00:22:06.070 }, 00:22:06.070 "auth": { 00:22:06.070 "state": "completed", 00:22:06.070 "digest": "sha256", 00:22:06.070 "dhgroup": "ffdhe3072" 00:22:06.070 } 00:22:06.070 } 00:22:06.070 ]' 00:22:06.070 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.331 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.331 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.331 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:06.331 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.331 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.331 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.331 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.592 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:07.162 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.163 19:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.163 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.163 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.163 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.163 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.163 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.163 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:07.163 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.423 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.683 00:22:07.684 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.684 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.684 19:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.944 { 00:22:07.944 "cntlid": 25, 00:22:07.944 "qid": 0, 00:22:07.944 "state": "enabled", 00:22:07.944 "thread": "nvmf_tgt_poll_group_000", 00:22:07.944 "listen_address": { 00:22:07.944 "trtype": "TCP", 00:22:07.944 "adrfam": "IPv4", 00:22:07.944 "traddr": "10.0.0.2", 00:22:07.944 "trsvcid": "4420" 00:22:07.944 }, 00:22:07.944 "peer_address": { 00:22:07.944 "trtype": "TCP", 00:22:07.944 "adrfam": "IPv4", 00:22:07.944 "traddr": "10.0.0.1", 00:22:07.944 "trsvcid": "54406" 00:22:07.944 }, 00:22:07.944 "auth": { 00:22:07.944 "state": "completed", 00:22:07.944 "digest": "sha256", 00:22:07.944 "dhgroup": "ffdhe4096" 00:22:07.944 } 00:22:07.944 } 00:22:07.944 ]' 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.944 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.205 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:08.775 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.037 19:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.298 00:22:09.299 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.299 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.299 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.559 { 00:22:09.559 "cntlid": 27, 00:22:09.559 "qid": 0, 00:22:09.559 "state": "enabled", 00:22:09.559 "thread": "nvmf_tgt_poll_group_000", 00:22:09.559 "listen_address": { 00:22:09.559 "trtype": "TCP", 00:22:09.559 "adrfam": "IPv4", 00:22:09.559 "traddr": "10.0.0.2", 00:22:09.559 "trsvcid": "4420" 00:22:09.559 }, 00:22:09.559 "peer_address": { 00:22:09.559 "trtype": "TCP", 00:22:09.559 "adrfam": "IPv4", 00:22:09.559 "traddr": "10.0.0.1", 00:22:09.559 "trsvcid": "54442" 00:22:09.559 }, 00:22:09.559 "auth": { 00:22:09.559 "state": "completed", 00:22:09.559 "digest": "sha256", 00:22:09.559 "dhgroup": "ffdhe4096" 00:22:09.559 } 00:22:09.559 } 00:22:09.559 ]' 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.559 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.821 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.821 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.821 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.821 19:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.763 19:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.763 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.024 00:22:11.024 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.024 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.024 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.285 19:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.285 { 00:22:11.285 "cntlid": 29, 00:22:11.285 "qid": 0, 00:22:11.285 "state": "enabled", 00:22:11.285 "thread": "nvmf_tgt_poll_group_000", 00:22:11.285 "listen_address": { 00:22:11.285 "trtype": "TCP", 00:22:11.285 "adrfam": "IPv4", 00:22:11.285 "traddr": "10.0.0.2", 00:22:11.285 "trsvcid": "4420" 00:22:11.285 }, 00:22:11.285 "peer_address": { 00:22:11.285 "trtype": "TCP", 00:22:11.285 "adrfam": "IPv4", 00:22:11.285 "traddr": "10.0.0.1", 00:22:11.285 "trsvcid": "54480" 00:22:11.285 }, 00:22:11.285 "auth": { 00:22:11.285 "state": "completed", 00:22:11.285 "digest": "sha256", 00:22:11.285 "dhgroup": "ffdhe4096" 00:22:11.285 } 00:22:11.285 } 00:22:11.285 ]' 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.546 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.546 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.546 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.546 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe4096 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.488 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.749 00:22:12.749 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.749 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.749 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.010 { 00:22:13.010 "cntlid": 31, 00:22:13.010 "qid": 0, 00:22:13.010 "state": "enabled", 00:22:13.010 "thread": 
"nvmf_tgt_poll_group_000", 00:22:13.010 "listen_address": { 00:22:13.010 "trtype": "TCP", 00:22:13.010 "adrfam": "IPv4", 00:22:13.010 "traddr": "10.0.0.2", 00:22:13.010 "trsvcid": "4420" 00:22:13.010 }, 00:22:13.010 "peer_address": { 00:22:13.010 "trtype": "TCP", 00:22:13.010 "adrfam": "IPv4", 00:22:13.010 "traddr": "10.0.0.1", 00:22:13.010 "trsvcid": "54528" 00:22:13.010 }, 00:22:13.010 "auth": { 00:22:13.010 "state": "completed", 00:22:13.010 "digest": "sha256", 00:22:13.010 "dhgroup": "ffdhe4096" 00:22:13.010 } 00:22:13.010 } 00:22:13.010 ]' 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.010 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.271 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.214 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.214 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.214 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.214 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.473 00:22:14.473 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.473 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.473 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.734 { 00:22:14.734 "cntlid": 33, 00:22:14.734 "qid": 0, 00:22:14.734 "state": "enabled", 00:22:14.734 "thread": "nvmf_tgt_poll_group_000", 00:22:14.734 "listen_address": { 00:22:14.734 "trtype": "TCP", 00:22:14.734 "adrfam": "IPv4", 00:22:14.734 "traddr": "10.0.0.2", 00:22:14.734 "trsvcid": "4420" 00:22:14.734 }, 00:22:14.734 "peer_address": { 00:22:14.734 "trtype": "TCP", 00:22:14.734 "adrfam": 
"IPv4", 00:22:14.734 "traddr": "10.0.0.1", 00:22:14.734 "trsvcid": "54550" 00:22:14.734 }, 00:22:14.734 "auth": { 00:22:14.734 "state": "completed", 00:22:14.734 "digest": "sha256", 00:22:14.734 "dhgroup": "ffdhe6144" 00:22:14.734 } 00:22:14.734 } 00:22:14.734 ]' 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.734 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.995 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:15.567 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.567 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.567 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.567 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:15.828 
19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.828 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.089 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.351 { 00:22:16.351 "cntlid": 35, 00:22:16.351 "qid": 0, 00:22:16.351 "state": "enabled", 00:22:16.351 "thread": "nvmf_tgt_poll_group_000", 00:22:16.351 "listen_address": { 00:22:16.351 "trtype": "TCP", 00:22:16.351 "adrfam": "IPv4", 00:22:16.351 "traddr": "10.0.0.2", 00:22:16.351 "trsvcid": "4420" 00:22:16.351 }, 00:22:16.351 "peer_address": { 00:22:16.351 "trtype": "TCP", 00:22:16.351 "adrfam": "IPv4", 00:22:16.351 "traddr": "10.0.0.1", 00:22:16.351 "trsvcid": "57886" 00:22:16.351 }, 00:22:16.351 "auth": { 00:22:16.351 "state": "completed", 00:22:16.351 "digest": "sha256", 00:22:16.351 "dhgroup": "ffdhe6144" 00:22:16.351 } 00:22:16.351 } 00:22:16.351 ]' 00:22:16.351 19:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:16.351 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.612 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:16.612 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.612 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.612 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.612 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.612 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.554 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.813 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.074 { 00:22:18.074 "cntlid": 37, 00:22:18.074 "qid": 0, 00:22:18.074 "state": "enabled", 00:22:18.074 "thread": "nvmf_tgt_poll_group_000", 00:22:18.074 "listen_address": { 00:22:18.074 "trtype": "TCP", 00:22:18.074 "adrfam": "IPv4", 00:22:18.074 "traddr": "10.0.0.2", 00:22:18.074 "trsvcid": "4420" 00:22:18.074 }, 00:22:18.074 "peer_address": { 00:22:18.074 "trtype": "TCP", 00:22:18.074 "adrfam": "IPv4", 00:22:18.074 "traddr": "10.0.0.1", 00:22:18.074 "trsvcid": "57916" 00:22:18.074 }, 00:22:18.074 "auth": { 00:22:18.074 "state": "completed", 00:22:18.074 "digest": "sha256", 00:22:18.074 "dhgroup": "ffdhe6144" 00:22:18.074 } 00:22:18.074 } 00:22:18.074 ]' 00:22:18.074 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.074 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.074 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.334 19:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:18.334 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.334 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.334 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.334 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.334 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:19.273 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.274 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.844 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.844 { 00:22:19.844 "cntlid": 39, 00:22:19.844 "qid": 0, 00:22:19.844 "state": "enabled", 00:22:19.844 "thread": "nvmf_tgt_poll_group_000", 00:22:19.844 "listen_address": { 00:22:19.844 "trtype": "TCP", 00:22:19.844 "adrfam": "IPv4", 00:22:19.844 "traddr": "10.0.0.2", 00:22:19.844 "trsvcid": "4420" 00:22:19.844 }, 00:22:19.844 "peer_address": { 00:22:19.844 "trtype": "TCP", 00:22:19.844 "adrfam": "IPv4", 00:22:19.844 "traddr": "10.0.0.1", 00:22:19.844 "trsvcid": "57948" 00:22:19.844 }, 00:22:19.844 "auth": { 00:22:19.844 "state": "completed", 00:22:19.844 "digest": "sha256", 00:22:19.844 "dhgroup": "ffdhe6144" 00:22:19.844 } 00:22:19.844 } 00:22:19.844 ]' 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.844 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:20.104 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.104 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.105 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.105 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.105 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.105 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.105 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.046 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.655 00:22:21.655 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.655 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.655 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.915 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.915 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.915 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.915 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.915 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.915 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.915 { 00:22:21.915 "cntlid": 41, 00:22:21.915 "qid": 0, 00:22:21.915 "state": "enabled", 00:22:21.915 "thread": "nvmf_tgt_poll_group_000", 00:22:21.915 "listen_address": { 00:22:21.915 "trtype": "TCP", 00:22:21.915 "adrfam": "IPv4", 00:22:21.915 "traddr": "10.0.0.2", 00:22:21.915 "trsvcid": "4420" 00:22:21.915 }, 00:22:21.916 "peer_address": { 00:22:21.916 "trtype": "TCP", 00:22:21.916 "adrfam": "IPv4", 00:22:21.916 "traddr": "10.0.0.1", 00:22:21.916 "trsvcid": "57974" 00:22:21.916 }, 00:22:21.916 "auth": { 00:22:21.916 "state": "completed", 00:22:21.916 "digest": "sha256", 00:22:21.916 "dhgroup": "ffdhe8192" 00:22:21.916 } 00:22:21.916 } 00:22:21.916 ]' 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.916 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.176 
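
The pass/fail check after each attach (auth.sh lines 44-48 in the trace) reduces to a controller-name lookup on the host socket plus a jq pass over the target's qpair list. Roughly, assuming the controller was attached as nvme0 and the iteration under test is sha256/ffdhe8192 as above:

    # the attach must have produced a controller named nvme0 on the host-side app
    name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # and the target must report a completed authentication with the expected parameters
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
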
19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.117 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.689 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.689 { 00:22:23.689 "cntlid": 43, 00:22:23.689 "qid": 0, 00:22:23.689 "state": "enabled", 00:22:23.689 "thread": "nvmf_tgt_poll_group_000", 00:22:23.689 "listen_address": { 00:22:23.689 "trtype": "TCP", 00:22:23.689 "adrfam": "IPv4", 00:22:23.689 "traddr": "10.0.0.2", 00:22:23.689 "trsvcid": "4420" 00:22:23.689 }, 00:22:23.689 "peer_address": { 00:22:23.689 "trtype": "TCP", 00:22:23.689 "adrfam": "IPv4", 00:22:23.689 "traddr": "10.0.0.1", 00:22:23.689 "trsvcid": "57994" 00:22:23.689 }, 00:22:23.689 "auth": { 00:22:23.689 "state": "completed", 00:22:23.689 "digest": "sha256", 00:22:23.689 "dhgroup": "ffdhe8192" 00:22:23.689 } 00:22:23.689 } 00:22:23.689 ]' 00:22:23.689 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.951 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.951 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.951 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.951 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.951 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.951 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.951 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.211 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:24.782 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.043 19:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.614 
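Each round in the trace follows the same host-side pattern: restrict the initiator to one digest/dhgroup pair, register the host NQN on the subsystem with that round's key, attach a controller with the same key, then inspect the resulting qpair. Below is a condensed, hedged sketch of one such round in shell, reusing only the RPCs and flags visible in the log; the shell variable names are introduced here for readability, and the key2/ckey2 key material is assumed to have been loaded earlier by target/auth.sh.

# One DH-HMAC-CHAP round as exercised above (sha256 + ffdhe8192, key index 2).
# Host-side bdev_nvme calls go to /var/tmp/host.sock, while the nvmf_* call goes to
# the target's default RPC socket, mirroring the hostrpc/rpc_cmd split in the script
# (an assumption about how rpc_cmd resolves its socket).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Limit the initiator to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Allow this host on the subsystem, bound to the round's key pair.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller from the host side with the matching keys; the DH-CHAP
# handshake runs during this attach.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2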
00:22:25.614 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.614 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.614 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.875 { 00:22:25.875 "cntlid": 45, 00:22:25.875 "qid": 0, 00:22:25.875 "state": "enabled", 00:22:25.875 "thread": "nvmf_tgt_poll_group_000", 00:22:25.875 "listen_address": { 00:22:25.875 "trtype": "TCP", 00:22:25.875 "adrfam": "IPv4", 00:22:25.875 "traddr": "10.0.0.2", 00:22:25.875 "trsvcid": "4420" 00:22:25.875 }, 00:22:25.875 "peer_address": { 00:22:25.875 "trtype": "TCP", 00:22:25.875 "adrfam": "IPv4", 00:22:25.875 "traddr": "10.0.0.1", 00:22:25.875 "trsvcid": "33638" 00:22:25.875 }, 00:22:25.875 "auth": { 00:22:25.875 "state": "completed", 00:22:25.875 "digest": "sha256", 00:22:25.875 "dhgroup": "ffdhe8192" 00:22:25.875 } 00:22:25.875 } 00:22:25.875 ]' 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.875 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.136 19:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.707 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:26.707 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.969 19:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.541 00:22:27.541 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.541 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.541 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:27.801 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.801 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.801 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.801 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.801 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.801 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.801 { 00:22:27.801 "cntlid": 47, 00:22:27.801 "qid": 0, 00:22:27.801 "state": "enabled", 00:22:27.801 "thread": "nvmf_tgt_poll_group_000", 00:22:27.801 "listen_address": { 00:22:27.801 "trtype": "TCP", 00:22:27.801 "adrfam": "IPv4", 00:22:27.801 "traddr": "10.0.0.2", 00:22:27.801 "trsvcid": "4420" 00:22:27.801 }, 00:22:27.801 "peer_address": { 00:22:27.801 "trtype": "TCP", 00:22:27.802 "adrfam": "IPv4", 00:22:27.802 "traddr": "10.0.0.1", 00:22:27.802 "trsvcid": "33664" 00:22:27.802 }, 00:22:27.802 "auth": { 00:22:27.802 "state": "completed", 00:22:27.802 "digest": "sha256", 00:22:27.802 "dhgroup": "ffdhe8192" 00:22:27.802 } 00:22:27.802 } 00:22:27.802 ]' 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.802 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.062 19:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:28.633 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.894 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.155 00:22:29.155 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.155 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.155 19:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.155 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.155 19:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.155 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.155 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.416 { 00:22:29.416 "cntlid": 49, 00:22:29.416 "qid": 0, 00:22:29.416 "state": "enabled", 00:22:29.416 "thread": "nvmf_tgt_poll_group_000", 00:22:29.416 "listen_address": { 00:22:29.416 "trtype": "TCP", 00:22:29.416 "adrfam": "IPv4", 00:22:29.416 "traddr": "10.0.0.2", 00:22:29.416 "trsvcid": "4420" 00:22:29.416 }, 00:22:29.416 "peer_address": { 00:22:29.416 "trtype": "TCP", 00:22:29.416 "adrfam": "IPv4", 00:22:29.416 "traddr": "10.0.0.1", 00:22:29.416 "trsvcid": "33678" 00:22:29.416 }, 00:22:29.416 "auth": { 00:22:29.416 "state": "completed", 00:22:29.416 "digest": "sha384", 00:22:29.416 "dhgroup": "null" 00:22:29.416 } 00:22:29.416 } 00:22:29.416 ]' 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.416 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.676 19:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:30.247 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.247 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.247 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.247 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.247 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.247 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.248 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:30.248 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.508 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.769 00:22:30.769 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.769 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.769 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.030 { 00:22:31.030 "cntlid": 51, 00:22:31.030 "qid": 0, 00:22:31.030 "state": "enabled", 00:22:31.030 "thread": "nvmf_tgt_poll_group_000", 00:22:31.030 "listen_address": { 00:22:31.030 "trtype": "TCP", 00:22:31.030 "adrfam": "IPv4", 00:22:31.030 "traddr": "10.0.0.2", 00:22:31.030 "trsvcid": "4420" 00:22:31.030 }, 00:22:31.030 "peer_address": { 00:22:31.030 "trtype": "TCP", 00:22:31.030 "adrfam": "IPv4", 00:22:31.030 "traddr": "10.0.0.1", 00:22:31.030 "trsvcid": "33706" 00:22:31.030 }, 00:22:31.030 "auth": { 00:22:31.030 "state": "completed", 00:22:31.030 "digest": "sha384", 00:22:31.030 "dhgroup": "null" 00:22:31.030 } 00:22:31.030 } 00:22:31.030 ]' 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.030 19:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.291 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:31.862 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:32.124 19:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.124 19:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.385 00:22:32.385 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.385 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.385 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.646 { 00:22:32.646 "cntlid": 53, 00:22:32.646 "qid": 0, 00:22:32.646 "state": "enabled", 00:22:32.646 "thread": 
"nvmf_tgt_poll_group_000", 00:22:32.646 "listen_address": { 00:22:32.646 "trtype": "TCP", 00:22:32.646 "adrfam": "IPv4", 00:22:32.646 "traddr": "10.0.0.2", 00:22:32.646 "trsvcid": "4420" 00:22:32.646 }, 00:22:32.646 "peer_address": { 00:22:32.646 "trtype": "TCP", 00:22:32.646 "adrfam": "IPv4", 00:22:32.646 "traddr": "10.0.0.1", 00:22:32.646 "trsvcid": "33742" 00:22:32.646 }, 00:22:32.646 "auth": { 00:22:32.646 "state": "completed", 00:22:32.646 "digest": "sha384", 00:22:32.646 "dhgroup": "null" 00:22:32.646 } 00:22:32.646 } 00:22:32.646 ]' 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.646 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.907 19:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:33.850 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.850 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.850 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:22:33.851 19:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.851 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.111 00:22:34.111 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.111 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.111 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.111 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.112 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.112 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.112 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.112 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.112 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.112 { 00:22:34.112 "cntlid": 55, 00:22:34.112 "qid": 0, 00:22:34.112 "state": "enabled", 00:22:34.112 "thread": "nvmf_tgt_poll_group_000", 00:22:34.112 "listen_address": { 00:22:34.112 "trtype": "TCP", 00:22:34.112 "adrfam": "IPv4", 00:22:34.112 "traddr": "10.0.0.2", 00:22:34.112 "trsvcid": "4420" 00:22:34.112 }, 00:22:34.112 "peer_address": { 00:22:34.112 "trtype": "TCP", 00:22:34.112 "adrfam": "IPv4", 00:22:34.112 "traddr": "10.0.0.1", 00:22:34.112 "trsvcid": "33778" 00:22:34.112 }, 00:22:34.112 "auth": { 00:22:34.112 "state": "completed", 00:22:34.112 
"digest": "sha384", 00:22:34.112 "dhgroup": "null" 00:22:34.112 } 00:22:34.112 } 00:22:34.112 ]' 00:22:34.112 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.372 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.372 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.372 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:34.373 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.373 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.373 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.373 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.633 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:35.204 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.466 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.727 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.727 { 00:22:35.727 "cntlid": 57, 00:22:35.727 "qid": 0, 00:22:35.727 "state": "enabled", 00:22:35.727 "thread": "nvmf_tgt_poll_group_000", 00:22:35.727 "listen_address": { 00:22:35.727 "trtype": "TCP", 00:22:35.727 "adrfam": "IPv4", 00:22:35.727 "traddr": "10.0.0.2", 00:22:35.727 "trsvcid": "4420" 00:22:35.727 }, 00:22:35.727 "peer_address": { 00:22:35.727 "trtype": "TCP", 00:22:35.727 "adrfam": "IPv4", 00:22:35.727 "traddr": "10.0.0.1", 00:22:35.727 "trsvcid": "47006" 00:22:35.727 }, 00:22:35.727 "auth": { 00:22:35.727 "state": "completed", 00:22:35.727 "digest": "sha384", 00:22:35.727 "dhgroup": "ffdhe2048" 00:22:35.727 } 00:22:35.727 } 00:22:35.727 ]' 00:22:35.727 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.989 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:36.932 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:37.193 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:22:37.193 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:37.193 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:37.193 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:37.193 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:37.193 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.193 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.194 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.194 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.194 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.194 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.194 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.454 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.454 { 00:22:37.454 "cntlid": 59, 00:22:37.454 "qid": 0, 00:22:37.454 "state": "enabled", 00:22:37.454 "thread": "nvmf_tgt_poll_group_000", 00:22:37.454 "listen_address": { 00:22:37.454 "trtype": "TCP", 00:22:37.454 "adrfam": "IPv4", 00:22:37.454 "traddr": "10.0.0.2", 00:22:37.454 "trsvcid": "4420" 00:22:37.454 }, 00:22:37.454 "peer_address": { 00:22:37.454 "trtype": "TCP", 00:22:37.454 "adrfam": "IPv4", 00:22:37.454 "traddr": "10.0.0.1", 00:22:37.454 "trsvcid": "47046" 00:22:37.454 }, 00:22:37.454 "auth": { 00:22:37.454 "state": "completed", 00:22:37.454 "digest": "sha384", 00:22:37.454 "dhgroup": "ffdhe2048" 00:22:37.454 } 00:22:37.454 } 00:22:37.454 ]' 00:22:37.454 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.455 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.455 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.716 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:37.716 19:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.716 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.716 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.717 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.717 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.679 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.964 00:22:38.964 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.964 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.964 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.225 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.225 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.225 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.225 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.225 { 00:22:39.225 "cntlid": 61, 00:22:39.225 "qid": 0, 00:22:39.225 "state": "enabled", 00:22:39.225 "thread": "nvmf_tgt_poll_group_000", 00:22:39.225 "listen_address": { 00:22:39.225 "trtype": "TCP", 00:22:39.225 "adrfam": "IPv4", 00:22:39.225 "traddr": "10.0.0.2", 00:22:39.225 "trsvcid": "4420" 00:22:39.225 }, 00:22:39.225 "peer_address": { 00:22:39.225 "trtype": "TCP", 00:22:39.225 "adrfam": "IPv4", 00:22:39.225 "traddr": "10.0.0.1", 00:22:39.225 "trsvcid": "47074" 00:22:39.225 }, 00:22:39.225 "auth": { 00:22:39.225 "state": "completed", 00:22:39.225 "digest": "sha384", 00:22:39.225 "dhgroup": "ffdhe2048" 00:22:39.225 } 00:22:39.225 } 00:22:39.225 ]' 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
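After each attach the script reads the qpair back from the target to confirm that authentication completed with the expected digest and DH group, detaches the controller, then repeats the handshake through the kernel initiator with nvme-cli before removing the host again. The following is a hedged sketch of that verification step for the sha384/ffdhe2048 round in progress here, using the same commands and jq filters as the trace; the rpc/subnqn/hostnqn variables match the earlier sketch, and key/ckey stand in for the DHHC-1 secrets printed above.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0
key='DHHC-1:xx:...'     # placeholder: host secret printed in the trace for this key index
ckey='DHHC-1:xx:...'    # placeholder: controller secret printed in the trace, when one is defined

# Confirm the attach produced an authenticated qpair with the expected parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Drop the bdev-layer controller, then redo the handshake with the kernel initiator.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n $subnqn
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn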
00:22:39.225 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.486 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.429 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.690 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.690 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.690 { 00:22:40.690 "cntlid": 63, 00:22:40.690 "qid": 0, 00:22:40.690 "state": "enabled", 00:22:40.690 "thread": "nvmf_tgt_poll_group_000", 00:22:40.690 "listen_address": { 00:22:40.690 "trtype": "TCP", 00:22:40.690 "adrfam": "IPv4", 00:22:40.690 "traddr": "10.0.0.2", 00:22:40.690 "trsvcid": "4420" 00:22:40.690 }, 00:22:40.690 "peer_address": { 00:22:40.690 "trtype": "TCP", 00:22:40.690 "adrfam": "IPv4", 00:22:40.690 "traddr": "10.0.0.1", 00:22:40.690 "trsvcid": "47102" 00:22:40.690 }, 00:22:40.690 "auth": { 00:22:40.690 "state": "completed", 00:22:40.690 "digest": "sha384", 00:22:40.690 "dhgroup": "ffdhe2048" 00:22:40.690 } 00:22:40.690 } 00:22:40.690 ]' 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.950 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.211 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.783 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.045 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.306 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.306 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.306 { 00:22:42.306 "cntlid": 65, 00:22:42.306 "qid": 0, 00:22:42.306 "state": "enabled", 00:22:42.306 "thread": "nvmf_tgt_poll_group_000", 00:22:42.306 "listen_address": { 00:22:42.306 "trtype": "TCP", 00:22:42.306 "adrfam": "IPv4", 00:22:42.306 "traddr": "10.0.0.2", 00:22:42.306 "trsvcid": "4420" 00:22:42.306 }, 00:22:42.306 "peer_address": { 00:22:42.306 "trtype": "TCP", 00:22:42.306 "adrfam": "IPv4", 00:22:42.306 "traddr": "10.0.0.1", 00:22:42.306 "trsvcid": "47136" 00:22:42.306 }, 00:22:42.306 "auth": { 00:22:42.306 "state": "completed", 00:22:42.306 "digest": "sha384", 00:22:42.307 "dhgroup": "ffdhe3072" 00:22:42.307 } 00:22:42.307 } 00:22:42.307 ]' 00:22:42.307 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.567 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.567 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.567 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:42.567 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.567 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.567 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.567 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.828 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:43.398 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.658 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.918 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.918 { 00:22:43.918 "cntlid": 67, 00:22:43.918 "qid": 0, 00:22:43.918 "state": "enabled", 00:22:43.918 "thread": "nvmf_tgt_poll_group_000", 00:22:43.918 "listen_address": { 00:22:43.918 "trtype": "TCP", 00:22:43.918 "adrfam": "IPv4", 00:22:43.918 "traddr": "10.0.0.2", 00:22:43.918 "trsvcid": "4420" 00:22:43.918 }, 00:22:43.918 "peer_address": { 00:22:43.918 "trtype": "TCP", 00:22:43.918 "adrfam": "IPv4", 00:22:43.918 "traddr": "10.0.0.1", 00:22:43.918 "trsvcid": "47166" 00:22:43.918 }, 00:22:43.918 "auth": { 00:22:43.918 "state": "completed", 00:22:43.918 "digest": "sha384", 00:22:43.918 "dhgroup": "ffdhe3072" 00:22:43.918 } 00:22:43.918 } 00:22:43.918 ]' 00:22:43.918 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:44.179 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.179 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:44.179 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:44.179 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.179 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.179 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.179 19:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.179 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:45.125 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.125 19:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:45.125 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.125 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.125 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.125 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.125 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:45.125 19:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.125 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.385 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.385 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.385 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.385 00:22:45.645 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.645 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.645 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.645 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.645 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.645 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.645 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.646 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.646 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.646 { 00:22:45.646 "cntlid": 69, 00:22:45.646 "qid": 0, 00:22:45.646 "state": "enabled", 00:22:45.646 "thread": "nvmf_tgt_poll_group_000", 00:22:45.646 "listen_address": { 00:22:45.646 "trtype": "TCP", 00:22:45.646 "adrfam": "IPv4", 00:22:45.646 "traddr": "10.0.0.2", 00:22:45.646 "trsvcid": "4420" 00:22:45.646 }, 00:22:45.646 "peer_address": { 00:22:45.646 "trtype": "TCP", 00:22:45.646 "adrfam": "IPv4", 00:22:45.646 "traddr": "10.0.0.1", 00:22:45.646 "trsvcid": "54928" 00:22:45.646 }, 00:22:45.646 "auth": { 00:22:45.646 "state": "completed", 00:22:45.646 "digest": "sha384", 00:22:45.646 "dhgroup": "ffdhe3072" 00:22:45.646 } 00:22:45.646 } 00:22:45.646 ]' 00:22:45.646 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.646 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.646 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.905 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:45.905 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.905 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.905 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.905 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.905 19:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.848 19:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:46.848 19:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.110 00:22:47.110 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.110 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.110 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.371 { 00:22:47.371 "cntlid": 71, 00:22:47.371 "qid": 0, 00:22:47.371 "state": "enabled", 00:22:47.371 "thread": "nvmf_tgt_poll_group_000", 00:22:47.371 "listen_address": { 00:22:47.371 "trtype": "TCP", 00:22:47.371 "adrfam": "IPv4", 00:22:47.371 "traddr": "10.0.0.2", 00:22:47.371 "trsvcid": "4420" 00:22:47.371 }, 00:22:47.371 "peer_address": { 00:22:47.371 "trtype": "TCP", 00:22:47.371 "adrfam": "IPv4", 00:22:47.371 "traddr": "10.0.0.1", 00:22:47.371 "trsvcid": "54948" 00:22:47.371 }, 00:22:47.371 "auth": { 00:22:47.371 "state": "completed", 00:22:47.371 "digest": "sha384", 00:22:47.371 "dhgroup": "ffdhe3072" 00:22:47.371 } 00:22:47.371 } 00:22:47.371 ]' 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.371 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.372 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.372 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.633 19:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.576 19:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.576 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.836 00:22:48.836 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.836 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.836 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.097 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.097 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.097 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.097 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.097 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.097 19:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.097 { 00:22:49.097 "cntlid": 73, 00:22:49.097 "qid": 0, 00:22:49.097 "state": "enabled", 00:22:49.097 "thread": "nvmf_tgt_poll_group_000", 00:22:49.097 "listen_address": { 00:22:49.097 "trtype": "TCP", 00:22:49.097 "adrfam": "IPv4", 00:22:49.097 "traddr": "10.0.0.2", 00:22:49.097 "trsvcid": "4420" 00:22:49.097 }, 00:22:49.097 "peer_address": { 00:22:49.097 "trtype": "TCP", 00:22:49.097 "adrfam": "IPv4", 00:22:49.097 "traddr": "10.0.0.1", 00:22:49.097 "trsvcid": "54968" 00:22:49.097 }, 00:22:49.097 "auth": { 00:22:49.098 "state": "completed", 00:22:49.098 "digest": "sha384", 00:22:49.098 "dhgroup": "ffdhe4096" 00:22:49.098 } 00:22:49.098 } 00:22:49.098 ]' 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.098 19:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.358 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:50.301 19:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.301 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.302 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.563 00:22:50.563 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.563 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.563 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.824 { 00:22:50.824 "cntlid": 75, 00:22:50.824 "qid": 0, 00:22:50.824 "state": "enabled", 00:22:50.824 "thread": "nvmf_tgt_poll_group_000", 00:22:50.824 "listen_address": { 
00:22:50.824 "trtype": "TCP", 00:22:50.824 "adrfam": "IPv4", 00:22:50.824 "traddr": "10.0.0.2", 00:22:50.824 "trsvcid": "4420" 00:22:50.824 }, 00:22:50.824 "peer_address": { 00:22:50.824 "trtype": "TCP", 00:22:50.824 "adrfam": "IPv4", 00:22:50.824 "traddr": "10.0.0.1", 00:22:50.824 "trsvcid": "54992" 00:22:50.824 }, 00:22:50.824 "auth": { 00:22:50.824 "state": "completed", 00:22:50.824 "digest": "sha384", 00:22:50.824 "dhgroup": "ffdhe4096" 00:22:50.824 } 00:22:50.824 } 00:22:50.824 ]' 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.824 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.085 19:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:51.657 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.917 19:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.177 00:22:52.177 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.177 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.177 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.436 { 00:22:52.436 "cntlid": 77, 00:22:52.436 "qid": 0, 00:22:52.436 "state": "enabled", 00:22:52.436 "thread": "nvmf_tgt_poll_group_000", 00:22:52.436 "listen_address": { 00:22:52.436 "trtype": "TCP", 00:22:52.436 "adrfam": "IPv4", 00:22:52.436 "traddr": "10.0.0.2", 00:22:52.436 "trsvcid": "4420" 00:22:52.436 }, 00:22:52.436 "peer_address": { 00:22:52.436 "trtype": "TCP", 00:22:52.436 "adrfam": "IPv4", 00:22:52.436 "traddr": "10.0.0.1", 00:22:52.436 "trsvcid": "55008" 00:22:52.436 }, 00:22:52.436 "auth": { 00:22:52.436 
"state": "completed", 00:22:52.436 "digest": "sha384", 00:22:52.436 "dhgroup": "ffdhe4096" 00:22:52.436 } 00:22:52.436 } 00:22:52.436 ]' 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.436 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.696 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:22:53.636 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.636 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.636 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.636 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.636 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.637 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.897 00:22:53.897 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:53.897 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:53.897 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:54.158 { 00:22:54.158 "cntlid": 79, 00:22:54.158 "qid": 0, 00:22:54.158 "state": "enabled", 00:22:54.158 "thread": "nvmf_tgt_poll_group_000", 00:22:54.158 "listen_address": { 00:22:54.158 "trtype": "TCP", 00:22:54.158 "adrfam": "IPv4", 00:22:54.158 "traddr": "10.0.0.2", 00:22:54.158 "trsvcid": "4420" 00:22:54.158 }, 00:22:54.158 "peer_address": { 00:22:54.158 "trtype": "TCP", 00:22:54.158 "adrfam": "IPv4", 00:22:54.158 "traddr": "10.0.0.1", 00:22:54.158 "trsvcid": "55030" 00:22:54.158 }, 00:22:54.158 "auth": { 00:22:54.158 "state": "completed", 00:22:54.158 "digest": "sha384", 00:22:54.158 "dhgroup": "ffdhe4096" 00:22:54.158 } 00:22:54.158 } 00:22:54.158 ]' 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:54.158 19:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:54.158 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:54.158 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.158 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.158 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.158 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.418 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:55.364 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.364 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.666 00:22:55.666 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.666 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.666 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.926 { 00:22:55.926 "cntlid": 81, 00:22:55.926 "qid": 0, 00:22:55.926 "state": "enabled", 00:22:55.926 "thread": "nvmf_tgt_poll_group_000", 00:22:55.926 "listen_address": { 00:22:55.926 "trtype": "TCP", 00:22:55.926 "adrfam": "IPv4", 00:22:55.926 "traddr": "10.0.0.2", 00:22:55.926 "trsvcid": "4420" 00:22:55.926 }, 00:22:55.926 "peer_address": { 00:22:55.926 "trtype": "TCP", 00:22:55.926 "adrfam": "IPv4", 00:22:55.926 "traddr": "10.0.0.1", 00:22:55.926 "trsvcid": "60242" 00:22:55.926 }, 00:22:55.926 "auth": { 00:22:55.926 "state": "completed", 00:22:55.926 "digest": "sha384", 00:22:55.926 "dhgroup": "ffdhe6144" 00:22:55.926 } 00:22:55.926 } 00:22:55.926 ]' 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:55.926 19:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.926 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.187 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:22:56.757 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.017 19:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.017 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.587 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:57.587 { 00:22:57.587 "cntlid": 83, 00:22:57.587 "qid": 0, 00:22:57.587 "state": "enabled", 00:22:57.587 "thread": "nvmf_tgt_poll_group_000", 00:22:57.587 "listen_address": { 00:22:57.587 "trtype": "TCP", 00:22:57.587 "adrfam": "IPv4", 00:22:57.587 "traddr": "10.0.0.2", 00:22:57.587 "trsvcid": "4420" 00:22:57.587 }, 00:22:57.587 "peer_address": { 00:22:57.587 "trtype": "TCP", 00:22:57.587 "adrfam": "IPv4", 00:22:57.587 "traddr": "10.0.0.1", 00:22:57.587 "trsvcid": "60266" 00:22:57.587 }, 00:22:57.587 "auth": { 00:22:57.587 "state": "completed", 00:22:57.587 "digest": "sha384", 00:22:57.587 "dhgroup": "ffdhe6144" 00:22:57.587 } 00:22:57.587 } 00:22:57.587 ]' 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:57.587 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:57.848 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.848 19:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.848 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.848 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.791 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.052 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.312 00:22:59.312 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.312 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.312 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.573 { 00:22:59.573 "cntlid": 85, 00:22:59.573 "qid": 0, 00:22:59.573 "state": "enabled", 00:22:59.573 "thread": "nvmf_tgt_poll_group_000", 00:22:59.573 "listen_address": { 00:22:59.573 "trtype": "TCP", 00:22:59.573 "adrfam": "IPv4", 00:22:59.573 "traddr": "10.0.0.2", 00:22:59.573 "trsvcid": "4420" 00:22:59.573 }, 00:22:59.573 "peer_address": { 00:22:59.573 "trtype": "TCP", 00:22:59.573 "adrfam": "IPv4", 00:22:59.573 "traddr": "10.0.0.1", 00:22:59.573 "trsvcid": "60298" 00:22:59.573 }, 00:22:59.573 "auth": { 00:22:59.573 "state": "completed", 00:22:59.573 "digest": "sha384", 00:22:59.573 "dhgroup": "ffdhe6144" 00:22:59.573 } 00:22:59.573 } 00:22:59.573 ]' 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.573 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.834 
19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:00.406 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:00.667 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:00.927 00:23:01.188 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.188 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:01.188 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.188 { 00:23:01.188 "cntlid": 87, 00:23:01.188 "qid": 0, 00:23:01.188 "state": "enabled", 00:23:01.188 "thread": "nvmf_tgt_poll_group_000", 00:23:01.188 "listen_address": { 00:23:01.188 "trtype": "TCP", 00:23:01.188 "adrfam": "IPv4", 00:23:01.188 "traddr": "10.0.0.2", 00:23:01.188 "trsvcid": "4420" 00:23:01.188 }, 00:23:01.188 "peer_address": { 00:23:01.188 "trtype": "TCP", 00:23:01.188 "adrfam": "IPv4", 00:23:01.188 "traddr": "10.0.0.1", 00:23:01.188 "trsvcid": "60318" 00:23:01.188 }, 00:23:01.188 "auth": { 00:23:01.188 "state": "completed", 00:23:01.188 "digest": "sha384", 00:23:01.188 "dhgroup": "ffdhe6144" 00:23:01.188 } 00:23:01.188 } 00:23:01.188 ]' 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:01.188 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.448 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:01.448 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.448 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.448 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.449 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.449 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:02.390 19:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.390 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.962 00:23:02.962 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.962 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.962 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.222 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.222 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.222 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.222 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.222 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.222 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:03.222 { 00:23:03.222 "cntlid": 89, 00:23:03.222 "qid": 0, 00:23:03.222 "state": "enabled", 00:23:03.222 "thread": "nvmf_tgt_poll_group_000", 00:23:03.222 "listen_address": { 00:23:03.222 "trtype": "TCP", 00:23:03.222 "adrfam": "IPv4", 00:23:03.222 "traddr": "10.0.0.2", 00:23:03.222 "trsvcid": "4420" 00:23:03.222 }, 00:23:03.222 "peer_address": { 00:23:03.222 "trtype": "TCP", 00:23:03.222 "adrfam": "IPv4", 00:23:03.222 "traddr": "10.0.0.1", 00:23:03.222 "trsvcid": "60344" 00:23:03.222 }, 00:23:03.222 "auth": { 00:23:03.222 "state": "completed", 00:23:03.222 "digest": "sha384", 00:23:03.222 "dhgroup": "ffdhe8192" 00:23:03.222 } 00:23:03.222 } 00:23:03.222 ]' 00:23:03.222 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:03.223 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:03.223 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:03.223 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:03.223 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:03.223 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.223 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.223 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.483 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:04.425 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.426 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.997 00:23:04.997 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.997 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.997 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.258 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.258 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.258 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.258 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.258 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.258 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:05.258 { 00:23:05.258 "cntlid": 91, 00:23:05.258 "qid": 0, 00:23:05.258 "state": "enabled", 00:23:05.258 "thread": "nvmf_tgt_poll_group_000", 00:23:05.258 "listen_address": { 00:23:05.258 "trtype": "TCP", 00:23:05.258 "adrfam": "IPv4", 00:23:05.258 "traddr": "10.0.0.2", 00:23:05.258 "trsvcid": "4420" 00:23:05.258 }, 00:23:05.258 "peer_address": { 00:23:05.258 "trtype": "TCP", 00:23:05.258 "adrfam": "IPv4", 00:23:05.258 "traddr": "10.0.0.1", 00:23:05.258 "trsvcid": "60358" 00:23:05.258 }, 00:23:05.258 "auth": { 00:23:05.258 "state": "completed", 00:23:05.258 "digest": "sha384", 00:23:05.258 "dhgroup": "ffdhe8192" 00:23:05.258 } 00:23:05.258 } 00:23:05.258 ]' 00:23:05.258 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:05.258 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:05.258 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.258 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:05.258 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.258 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.258 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.258 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.519 19:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:23:06.090 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.090 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.090 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.090 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.352 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.924 00:23:06.924 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.924 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.924 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.186 { 00:23:07.186 "cntlid": 93, 00:23:07.186 "qid": 0, 00:23:07.186 "state": "enabled", 00:23:07.186 "thread": "nvmf_tgt_poll_group_000", 00:23:07.186 "listen_address": { 00:23:07.186 "trtype": "TCP", 00:23:07.186 "adrfam": "IPv4", 00:23:07.186 "traddr": "10.0.0.2", 00:23:07.186 "trsvcid": "4420" 00:23:07.186 }, 00:23:07.186 "peer_address": { 00:23:07.186 "trtype": "TCP", 00:23:07.186 "adrfam": "IPv4", 00:23:07.186 "traddr": "10.0.0.1", 00:23:07.186 "trsvcid": "34956" 00:23:07.186 }, 00:23:07.186 "auth": { 00:23:07.186 "state": "completed", 00:23:07.186 "digest": "sha384", 00:23:07.186 "dhgroup": "ffdhe8192" 00:23:07.186 } 00:23:07.186 } 00:23:07.186 ]' 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:07.186 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.186 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:07.186 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.186 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.186 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.186 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.446 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:08.018 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.018 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.018 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.018 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.018 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.018 19:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.018 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:08.018 19:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.278 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.850 00:23:08.850 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.850 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.850 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.111 { 00:23:09.111 "cntlid": 95, 00:23:09.111 "qid": 0, 00:23:09.111 "state": "enabled", 00:23:09.111 "thread": "nvmf_tgt_poll_group_000", 00:23:09.111 "listen_address": { 00:23:09.111 "trtype": "TCP", 00:23:09.111 "adrfam": "IPv4", 00:23:09.111 "traddr": "10.0.0.2", 00:23:09.111 "trsvcid": "4420" 00:23:09.111 }, 00:23:09.111 "peer_address": { 00:23:09.111 "trtype": "TCP", 00:23:09.111 "adrfam": "IPv4", 00:23:09.111 "traddr": "10.0.0.1", 00:23:09.111 "trsvcid": "34980" 00:23:09.111 }, 00:23:09.111 "auth": { 00:23:09.111 "state": "completed", 00:23:09.111 "digest": "sha384", 00:23:09.111 "dhgroup": "ffdhe8192" 00:23:09.111 } 00:23:09.111 } 00:23:09.111 ]' 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.111 19:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.372 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:09.944 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.944 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.944 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.205 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.205 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.205 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:10.205 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.205 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:10.205 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:23:10.205 19:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.205 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.466 00:23:10.466 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:10.466 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.466 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:10.726 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.726 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.726 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.726 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.727 { 00:23:10.727 "cntlid": 97, 00:23:10.727 "qid": 0, 00:23:10.727 
"state": "enabled", 00:23:10.727 "thread": "nvmf_tgt_poll_group_000", 00:23:10.727 "listen_address": { 00:23:10.727 "trtype": "TCP", 00:23:10.727 "adrfam": "IPv4", 00:23:10.727 "traddr": "10.0.0.2", 00:23:10.727 "trsvcid": "4420" 00:23:10.727 }, 00:23:10.727 "peer_address": { 00:23:10.727 "trtype": "TCP", 00:23:10.727 "adrfam": "IPv4", 00:23:10.727 "traddr": "10.0.0.1", 00:23:10.727 "trsvcid": "35020" 00:23:10.727 }, 00:23:10.727 "auth": { 00:23:10.727 "state": "completed", 00:23:10.727 "digest": "sha512", 00:23:10.727 "dhgroup": "null" 00:23:10.727 } 00:23:10.727 } 00:23:10.727 ]' 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.727 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.987 19:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.930 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.190 00:23:12.190 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:12.190 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:12.191 19:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.191 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.191 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.191 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.191 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.191 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.191 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:12.191 { 00:23:12.191 "cntlid": 99, 00:23:12.191 "qid": 0, 00:23:12.191 "state": "enabled", 00:23:12.191 "thread": "nvmf_tgt_poll_group_000", 00:23:12.191 "listen_address": { 00:23:12.191 "trtype": "TCP", 00:23:12.191 "adrfam": "IPv4", 00:23:12.191 "traddr": "10.0.0.2", 00:23:12.191 "trsvcid": "4420" 00:23:12.191 }, 00:23:12.191 "peer_address": { 00:23:12.191 "trtype": "TCP", 00:23:12.191 "adrfam": "IPv4", 
00:23:12.191 "traddr": "10.0.0.1", 00:23:12.191 "trsvcid": "35046" 00:23:12.191 }, 00:23:12.191 "auth": { 00:23:12.191 "state": "completed", 00:23:12.191 "digest": "sha512", 00:23:12.191 "dhgroup": "null" 00:23:12.191 } 00:23:12.191 } 00:23:12.191 ]' 00:23:12.191 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:12.451 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.451 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:12.451 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:12.451 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:12.451 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.451 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.451 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.713 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:13.318 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:13.578 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:23:13.578 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=null 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.579 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.840 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.840 { 00:23:13.840 "cntlid": 101, 00:23:13.840 "qid": 0, 00:23:13.840 "state": "enabled", 00:23:13.840 "thread": "nvmf_tgt_poll_group_000", 00:23:13.840 "listen_address": { 00:23:13.840 "trtype": "TCP", 00:23:13.840 "adrfam": "IPv4", 00:23:13.840 "traddr": "10.0.0.2", 00:23:13.840 "trsvcid": "4420" 00:23:13.840 }, 00:23:13.840 "peer_address": { 00:23:13.840 "trtype": "TCP", 00:23:13.840 "adrfam": "IPv4", 00:23:13.840 "traddr": "10.0.0.1", 00:23:13.840 "trsvcid": "35066" 00:23:13.840 }, 00:23:13.840 "auth": { 00:23:13.840 "state": "completed", 00:23:13.840 "digest": "sha512", 00:23:13.840 "dhgroup": "null" 00:23:13.840 } 00:23:13.840 } 00:23:13.840 ]' 00:23:13.840 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:14.101 
19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.101 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.101 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:14.101 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.101 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.101 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.101 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.362 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:14.932 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.194 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.455 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.455 { 00:23:15.455 "cntlid": 103, 00:23:15.455 "qid": 0, 00:23:15.455 "state": "enabled", 00:23:15.455 "thread": "nvmf_tgt_poll_group_000", 00:23:15.455 "listen_address": { 00:23:15.455 "trtype": "TCP", 00:23:15.455 "adrfam": "IPv4", 00:23:15.455 "traddr": "10.0.0.2", 00:23:15.455 "trsvcid": "4420" 00:23:15.455 }, 00:23:15.455 "peer_address": { 00:23:15.455 "trtype": "TCP", 00:23:15.455 "adrfam": "IPv4", 00:23:15.455 "traddr": "10.0.0.1", 00:23:15.455 "trsvcid": "34814" 00:23:15.455 }, 00:23:15.455 "auth": { 00:23:15.455 "state": "completed", 00:23:15.455 "digest": "sha512", 00:23:15.455 "dhgroup": "null" 00:23:15.455 } 00:23:15.455 } 00:23:15.455 ]' 00:23:15.455 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.716 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.716 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.716 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:15.716 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
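The qpair check traced above is the same in every iteration: the target's qpair list is fetched over RPC and the first entry's auth block is filtered with jq to confirm the digest, DH group, and final state. A minimal standalone sketch of that check follows, assuming the target RPC server uses its default socket and the subsystem NQN from this run; the shell variable names are illustrative, not taken from the test script.

# Sketch only: verify the DH-HMAC-CHAP parameters negotiated on the first qpair.
# Assumes the SPDK target is reachable on its default RPC socket and that the
# subsystem below already has an authenticated connection.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SUBNQN=nqn.2024-03.io.spdk:cnode0

qpairs=$("$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs "$SUBNQN")
digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state' <<< "$qpairs")

# Expected values depend on the combination under test (here sha512 with no DH group).
[[ $digest == sha512 && $dhgroup == null && $state == completed ]] \
    && echo "auth parameters OK" \
    || echo "unexpected auth parameters: $digest/$dhgroup/$state"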
00:23:15.716 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.716 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.716 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.977 19:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:16.550 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.813 
19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.813 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.074 00:23:17.074 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.074 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.074 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.074 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.074 19:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.074 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.074 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.074 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.074 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.074 { 00:23:17.074 "cntlid": 105, 00:23:17.074 "qid": 0, 00:23:17.074 "state": "enabled", 00:23:17.074 "thread": "nvmf_tgt_poll_group_000", 00:23:17.074 "listen_address": { 00:23:17.074 "trtype": "TCP", 00:23:17.074 "adrfam": "IPv4", 00:23:17.074 "traddr": "10.0.0.2", 00:23:17.074 "trsvcid": "4420" 00:23:17.074 }, 00:23:17.074 "peer_address": { 00:23:17.074 "trtype": "TCP", 00:23:17.074 "adrfam": "IPv4", 00:23:17.074 "traddr": "10.0.0.1", 00:23:17.074 "trsvcid": "34842" 00:23:17.074 }, 00:23:17.074 "auth": { 00:23:17.074 "state": "completed", 00:23:17.074 "digest": "sha512", 00:23:17.074 "dhgroup": "ffdhe2048" 00:23:17.074 } 00:23:17.074 } 00:23:17.074 ]' 00:23:17.074 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.335 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.335 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.335 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:17.335 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.335 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.335 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.335 19:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.597 19:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:18.170 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.431 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.692 00:23:18.693 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.693 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.693 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:19.028 { 00:23:19.028 "cntlid": 107, 00:23:19.028 "qid": 0, 00:23:19.028 "state": "enabled", 00:23:19.028 "thread": "nvmf_tgt_poll_group_000", 00:23:19.028 "listen_address": { 00:23:19.028 "trtype": "TCP", 00:23:19.028 "adrfam": "IPv4", 00:23:19.028 "traddr": "10.0.0.2", 00:23:19.028 "trsvcid": "4420" 00:23:19.028 }, 00:23:19.028 "peer_address": { 00:23:19.028 "trtype": "TCP", 00:23:19.028 "adrfam": "IPv4", 00:23:19.028 "traddr": "10.0.0.1", 00:23:19.028 "trsvcid": "34856" 00:23:19.028 }, 00:23:19.028 "auth": { 00:23:19.028 "state": "completed", 00:23:19.028 "digest": "sha512", 00:23:19.028 "dhgroup": "ffdhe2048" 00:23:19.028 } 00:23:19.028 } 00:23:19.028 ]' 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.028 19:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.000 19:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.261 00:23:20.261 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.261 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.261 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.523 { 00:23:20.523 "cntlid": 109, 00:23:20.523 "qid": 0, 00:23:20.523 "state": "enabled", 00:23:20.523 "thread": "nvmf_tgt_poll_group_000", 00:23:20.523 "listen_address": { 00:23:20.523 "trtype": "TCP", 00:23:20.523 "adrfam": "IPv4", 00:23:20.523 "traddr": "10.0.0.2", 00:23:20.523 "trsvcid": "4420" 00:23:20.523 }, 00:23:20.523 "peer_address": { 00:23:20.523 "trtype": "TCP", 00:23:20.523 "adrfam": "IPv4", 00:23:20.523 "traddr": "10.0.0.1", 00:23:20.523 "trsvcid": "34870" 00:23:20.523 }, 00:23:20.523 "auth": { 00:23:20.523 "state": "completed", 00:23:20.523 "digest": "sha512", 00:23:20.523 "dhgroup": "ffdhe2048" 00:23:20.523 } 00:23:20.523 } 00:23:20.523 ]' 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:20.523 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.785 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.785 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.785 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.785 19:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:21.728 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:21.989 00:23:21.989 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:21.989 19:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:21.989 19:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.250 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.250 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.250 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.251 { 00:23:22.251 "cntlid": 111, 00:23:22.251 "qid": 0, 00:23:22.251 "state": "enabled", 00:23:22.251 "thread": "nvmf_tgt_poll_group_000", 00:23:22.251 "listen_address": { 00:23:22.251 "trtype": "TCP", 00:23:22.251 "adrfam": "IPv4", 00:23:22.251 "traddr": "10.0.0.2", 00:23:22.251 "trsvcid": "4420" 00:23:22.251 }, 00:23:22.251 "peer_address": { 00:23:22.251 "trtype": "TCP", 00:23:22.251 "adrfam": "IPv4", 00:23:22.251 "traddr": "10.0.0.1", 00:23:22.251 "trsvcid": "34898" 00:23:22.251 }, 00:23:22.251 "auth": { 00:23:22.251 "state": "completed", 00:23:22.251 "digest": "sha512", 00:23:22.251 "dhgroup": "ffdhe2048" 00:23:22.251 } 00:23:22.251 } 00:23:22.251 ]' 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.251 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.511 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.511 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.511 19:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.452 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.712 00:23:23.712 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:23.712 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:23.712 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:23.973 { 00:23:23.973 "cntlid": 113, 00:23:23.973 "qid": 0, 00:23:23.973 "state": "enabled", 00:23:23.973 "thread": "nvmf_tgt_poll_group_000", 00:23:23.973 "listen_address": { 00:23:23.973 "trtype": "TCP", 00:23:23.973 "adrfam": "IPv4", 00:23:23.973 "traddr": "10.0.0.2", 00:23:23.973 "trsvcid": "4420" 00:23:23.973 }, 00:23:23.973 "peer_address": { 00:23:23.973 "trtype": "TCP", 00:23:23.973 "adrfam": "IPv4", 00:23:23.973 "traddr": "10.0.0.1", 00:23:23.973 "trsvcid": "34904" 00:23:23.973 }, 00:23:23.973 "auth": { 00:23:23.973 "state": "completed", 00:23:23.973 "digest": "sha512", 00:23:23.973 "dhgroup": "ffdhe3072" 00:23:23.973 } 00:23:23.973 } 00:23:23.973 ]' 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.973 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.974 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.234 19:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:24.805 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.067 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.328 00:23:25.328 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.328 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.328 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.588 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.588 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.588 
19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.588 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.588 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.588 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:25.589 { 00:23:25.589 "cntlid": 115, 00:23:25.589 "qid": 0, 00:23:25.589 "state": "enabled", 00:23:25.589 "thread": "nvmf_tgt_poll_group_000", 00:23:25.589 "listen_address": { 00:23:25.589 "trtype": "TCP", 00:23:25.589 "adrfam": "IPv4", 00:23:25.589 "traddr": "10.0.0.2", 00:23:25.589 "trsvcid": "4420" 00:23:25.589 }, 00:23:25.589 "peer_address": { 00:23:25.589 "trtype": "TCP", 00:23:25.589 "adrfam": "IPv4", 00:23:25.589 "traddr": "10.0.0.1", 00:23:25.589 "trsvcid": "36962" 00:23:25.589 }, 00:23:25.589 "auth": { 00:23:25.589 "state": "completed", 00:23:25.589 "digest": "sha512", 00:23:25.589 "dhgroup": "ffdhe3072" 00:23:25.589 } 00:23:25.589 } 00:23:25.589 ]' 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.589 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.849 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
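Each pass of the keyid loop above repeats the same three RPCs with the next key pair. Stripped of the xtrace prefixes and the long workspace paths, the iteration that just completed (key1/ckey1 are names of keys the script sets up earlier, outside this excerpt) amounts to roughly the following; rpc.py is scripts/rpc.py from the SPDK checkout, and the target-side call presumably goes to the default RPC socket:

# host side (separate app behind /var/tmp/host.sock): pin the allowed digest and DH group
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# target side: allow this host NQN on the subsystem with the key1/ckey1 pair
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach the controller, which triggers the DH-HMAC-CHAP exchange
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1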
00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.792 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.052 00:23:27.052 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:27.052 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:27.052 19:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.317 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.317 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.317 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.317 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.317 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.317 
19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:27.317 { 00:23:27.317 "cntlid": 117, 00:23:27.317 "qid": 0, 00:23:27.317 "state": "enabled", 00:23:27.317 "thread": "nvmf_tgt_poll_group_000", 00:23:27.317 "listen_address": { 00:23:27.317 "trtype": "TCP", 00:23:27.317 "adrfam": "IPv4", 00:23:27.317 "traddr": "10.0.0.2", 00:23:27.317 "trsvcid": "4420" 00:23:27.317 }, 00:23:27.317 "peer_address": { 00:23:27.317 "trtype": "TCP", 00:23:27.317 "adrfam": "IPv4", 00:23:27.317 "traddr": "10.0.0.1", 00:23:27.317 "trsvcid": "36980" 00:23:27.317 }, 00:23:27.317 "auth": { 00:23:27.317 "state": "completed", 00:23:27.317 "digest": "sha512", 00:23:27.318 "dhgroup": "ffdhe3072" 00:23:27.318 } 00:23:27.318 } 00:23:27.318 ]' 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.318 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.579 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:28.151 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.151 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.151 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.151 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.412 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.679 00:23:28.679 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.679 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:28.679 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.947 { 00:23:28.947 "cntlid": 119, 00:23:28.947 "qid": 0, 00:23:28.947 "state": "enabled", 00:23:28.947 "thread": "nvmf_tgt_poll_group_000", 00:23:28.947 "listen_address": { 00:23:28.947 "trtype": "TCP", 00:23:28.947 "adrfam": "IPv4", 00:23:28.947 "traddr": "10.0.0.2", 00:23:28.947 "trsvcid": "4420" 00:23:28.947 }, 00:23:28.947 
"peer_address": { 00:23:28.947 "trtype": "TCP", 00:23:28.947 "adrfam": "IPv4", 00:23:28.947 "traddr": "10.0.0.1", 00:23:28.947 "trsvcid": "37004" 00:23:28.947 }, 00:23:28.947 "auth": { 00:23:28.947 "state": "completed", 00:23:28.947 "digest": "sha512", 00:23:28.947 "dhgroup": "ffdhe3072" 00:23:28.947 } 00:23:28.947 } 00:23:28.947 ]' 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.947 19:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.208 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:30.151 19:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.151 19:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.412 00:23:30.412 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:30.412 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:30.412 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:30.673 { 00:23:30.673 "cntlid": 121, 00:23:30.673 "qid": 0, 00:23:30.673 "state": "enabled", 00:23:30.673 "thread": "nvmf_tgt_poll_group_000", 00:23:30.673 "listen_address": { 00:23:30.673 "trtype": "TCP", 00:23:30.673 "adrfam": "IPv4", 00:23:30.673 "traddr": "10.0.0.2", 00:23:30.673 "trsvcid": "4420" 00:23:30.673 }, 00:23:30.673 "peer_address": { 00:23:30.673 "trtype": "TCP", 00:23:30.673 "adrfam": "IPv4", 00:23:30.673 "traddr": "10.0.0.1", 00:23:30.673 "trsvcid": "37036" 00:23:30.673 }, 00:23:30.673 "auth": { 00:23:30.673 "state": "completed", 00:23:30.673 "digest": "sha512", 00:23:30.673 "dhgroup": 
"ffdhe4096" 00:23:30.673 } 00:23:30.673 } 00:23:30.673 ]' 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.673 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.933 19:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:31.504 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 
00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.765 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.025 00:23:32.025 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:32.025 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:32.026 19:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:32.285 { 00:23:32.285 "cntlid": 123, 00:23:32.285 "qid": 0, 00:23:32.285 "state": "enabled", 00:23:32.285 "thread": "nvmf_tgt_poll_group_000", 00:23:32.285 "listen_address": { 00:23:32.285 "trtype": "TCP", 00:23:32.285 "adrfam": "IPv4", 00:23:32.285 "traddr": "10.0.0.2", 00:23:32.285 "trsvcid": "4420" 00:23:32.285 }, 00:23:32.285 "peer_address": { 00:23:32.285 "trtype": "TCP", 00:23:32.285 "adrfam": "IPv4", 00:23:32.285 "traddr": "10.0.0.1", 00:23:32.285 "trsvcid": "37054" 00:23:32.285 }, 00:23:32.285 "auth": { 00:23:32.285 "state": "completed", 00:23:32.285 "digest": "sha512", 00:23:32.285 "dhgroup": "ffdhe4096" 00:23:32.285 } 00:23:32.285 } 00:23:32.285 ]' 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
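Two separate SPDK applications are being driven in these entries: rpc_cmd talks to the NVMe-oF target (presumably over the default /var/tmp/spdk.sock), while the hostrpc wrapper always adds -s /var/tmp/host.sock and talks to a second app acting as the initiator, which owns the nvme0 bdev_nvme controller. Side by side, the split looks roughly like:

# target side: subsystem/host management and qpair inspection
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
# host side: the initiator app behind /var/tmp/host.sock owns the nvme0 controller
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0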
00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.285 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.545 19:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.487 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.748 00:23:33.748 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:33.748 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:33.748 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.008 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.008 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.008 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.008 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.008 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.008 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:34.008 { 00:23:34.009 "cntlid": 125, 00:23:34.009 "qid": 0, 00:23:34.009 "state": "enabled", 00:23:34.009 "thread": "nvmf_tgt_poll_group_000", 00:23:34.009 "listen_address": { 00:23:34.009 "trtype": "TCP", 00:23:34.009 "adrfam": "IPv4", 00:23:34.009 "traddr": "10.0.0.2", 00:23:34.009 "trsvcid": "4420" 00:23:34.009 }, 00:23:34.009 "peer_address": { 00:23:34.009 "trtype": "TCP", 00:23:34.009 "adrfam": "IPv4", 00:23:34.009 "traddr": "10.0.0.1", 00:23:34.009 "trsvcid": "37082" 00:23:34.009 }, 00:23:34.009 "auth": { 00:23:34.009 "state": "completed", 00:23:34.009 "digest": "sha512", 00:23:34.009 "dhgroup": "ffdhe4096" 00:23:34.009 } 00:23:34.009 } 00:23:34.009 ]' 00:23:34.009 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:34.009 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:34.009 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:34.009 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:34.009 19:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:34.009 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.009 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.009 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.269 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.209 
19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:35.209 19:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:35.470 00:23:35.470 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:35.470 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:35.470 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.470 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.470 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.470 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.470 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:35.730 { 00:23:35.730 "cntlid": 127, 00:23:35.730 "qid": 0, 00:23:35.730 "state": "enabled", 00:23:35.730 "thread": "nvmf_tgt_poll_group_000", 00:23:35.730 "listen_address": { 00:23:35.730 "trtype": "TCP", 00:23:35.730 "adrfam": "IPv4", 00:23:35.730 "traddr": "10.0.0.2", 00:23:35.730 "trsvcid": "4420" 00:23:35.730 }, 00:23:35.730 "peer_address": { 00:23:35.730 "trtype": "TCP", 00:23:35.730 "adrfam": "IPv4", 00:23:35.730 "traddr": "10.0.0.1", 00:23:35.730 "trsvcid": "40196" 00:23:35.730 }, 00:23:35.730 "auth": { 00:23:35.730 "state": "completed", 00:23:35.730 "digest": "sha512", 00:23:35.730 "dhgroup": "ffdhe4096" 00:23:35.730 } 00:23:35.730 } 00:23:35.730 ]' 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:35.730 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.991 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:36.561 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.821 19:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.081 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:37.342 { 00:23:37.342 "cntlid": 129, 00:23:37.342 "qid": 0, 00:23:37.342 "state": "enabled", 00:23:37.342 "thread": "nvmf_tgt_poll_group_000", 00:23:37.342 "listen_address": { 00:23:37.342 "trtype": "TCP", 00:23:37.342 "adrfam": "IPv4", 00:23:37.342 "traddr": "10.0.0.2", 00:23:37.342 "trsvcid": "4420" 00:23:37.342 }, 00:23:37.342 "peer_address": { 00:23:37.342 "trtype": "TCP", 00:23:37.342 "adrfam": "IPv4", 00:23:37.342 "traddr": "10.0.0.1", 00:23:37.342 "trsvcid": "40220" 00:23:37.342 }, 00:23:37.342 "auth": { 00:23:37.342 "state": "completed", 00:23:37.342 "digest": "sha512", 00:23:37.342 "dhgroup": "ffdhe6144" 00:23:37.342 } 00:23:37.342 } 00:23:37.342 ]' 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:37.342 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:37.603 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:37.603 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:37.603 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.603 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.603 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.603 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.545 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.116 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:39.116 { 00:23:39.116 "cntlid": 131, 00:23:39.116 "qid": 0, 00:23:39.116 "state": "enabled", 00:23:39.116 "thread": "nvmf_tgt_poll_group_000", 00:23:39.116 "listen_address": { 00:23:39.116 "trtype": "TCP", 00:23:39.116 "adrfam": "IPv4", 00:23:39.116 "traddr": "10.0.0.2", 00:23:39.116 "trsvcid": "4420" 00:23:39.116 }, 00:23:39.116 "peer_address": { 00:23:39.116 "trtype": "TCP", 00:23:39.116 "adrfam": "IPv4", 00:23:39.116 "traddr": "10.0.0.1", 00:23:39.116 "trsvcid": "40246" 00:23:39.116 }, 00:23:39.116 "auth": { 00:23:39.116 "state": "completed", 00:23:39.116 "digest": "sha512", 00:23:39.116 "dhgroup": "ffdhe6144" 00:23:39.116 } 00:23:39.116 } 00:23:39.116 ]' 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:39.116 19:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:39.116 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:39.116 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:39.116 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:39.376 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.376 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.376 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.376 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret 
DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:23:40.319 19:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.319 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.579 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:40.840 { 00:23:40.840 "cntlid": 133, 00:23:40.840 "qid": 0, 00:23:40.840 "state": "enabled", 00:23:40.840 "thread": "nvmf_tgt_poll_group_000", 00:23:40.840 "listen_address": { 00:23:40.840 "trtype": "TCP", 00:23:40.840 "adrfam": "IPv4", 00:23:40.840 "traddr": "10.0.0.2", 00:23:40.840 "trsvcid": "4420" 00:23:40.840 }, 00:23:40.840 "peer_address": { 00:23:40.840 "trtype": "TCP", 00:23:40.840 "adrfam": "IPv4", 00:23:40.840 "traddr": "10.0.0.1", 00:23:40.840 "trsvcid": "40266" 00:23:40.840 }, 00:23:40.840 "auth": { 00:23:40.840 "state": "completed", 00:23:40.840 "digest": "sha512", 00:23:40.840 "dhgroup": "ffdhe6144" 00:23:40.840 } 00:23:40.840 } 00:23:40.840 ]' 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:40.840 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:41.101 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:41.101 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:41.101 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.101 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.101 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.101 19:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.042 19:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.042 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.043 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.043 19:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.614 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.614 19:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:42.614 { 00:23:42.614 "cntlid": 135, 00:23:42.614 "qid": 0, 00:23:42.614 "state": "enabled", 00:23:42.614 "thread": "nvmf_tgt_poll_group_000", 00:23:42.614 "listen_address": { 00:23:42.614 "trtype": "TCP", 00:23:42.614 "adrfam": "IPv4", 00:23:42.614 "traddr": "10.0.0.2", 00:23:42.614 "trsvcid": "4420" 00:23:42.614 }, 00:23:42.614 "peer_address": { 00:23:42.614 "trtype": "TCP", 00:23:42.614 "adrfam": "IPv4", 00:23:42.614 "traddr": "10.0.0.1", 00:23:42.614 "trsvcid": "40294" 00:23:42.614 }, 00:23:42.614 "auth": { 00:23:42.614 "state": "completed", 00:23:42.614 "digest": "sha512", 00:23:42.614 "dhgroup": "ffdhe6144" 00:23:42.614 } 00:23:42.614 } 00:23:42.614 ]' 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:42.614 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:42.875 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:42.875 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.875 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.875 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.875 19:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.818 19:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.818 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.390 00:23:44.390 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:44.390 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:44.390 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.650 19:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:44.650 { 00:23:44.650 "cntlid": 137, 00:23:44.650 "qid": 0, 00:23:44.650 "state": "enabled", 00:23:44.650 "thread": "nvmf_tgt_poll_group_000", 00:23:44.650 "listen_address": { 00:23:44.650 "trtype": "TCP", 00:23:44.650 "adrfam": "IPv4", 00:23:44.650 "traddr": "10.0.0.2", 00:23:44.650 "trsvcid": "4420" 00:23:44.650 }, 00:23:44.650 "peer_address": { 00:23:44.650 "trtype": "TCP", 00:23:44.650 "adrfam": "IPv4", 00:23:44.650 "traddr": "10.0.0.1", 00:23:44.650 "trsvcid": "40322" 00:23:44.650 }, 00:23:44.650 "auth": { 00:23:44.650 "state": "completed", 00:23:44.650 "digest": "sha512", 00:23:44.650 "dhgroup": "ffdhe8192" 00:23:44.650 } 00:23:44.650 } 00:23:44.650 ]' 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.650 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.910 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:45.482 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:45.743 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.744 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.315 00:23:46.315 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:46.315 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.315 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:46.576 { 00:23:46.576 "cntlid": 139, 00:23:46.576 "qid": 0, 00:23:46.576 "state": "enabled", 00:23:46.576 "thread": "nvmf_tgt_poll_group_000", 00:23:46.576 "listen_address": { 00:23:46.576 "trtype": "TCP", 00:23:46.576 "adrfam": "IPv4", 00:23:46.576 "traddr": "10.0.0.2", 00:23:46.576 "trsvcid": "4420" 00:23:46.576 }, 00:23:46.576 "peer_address": { 00:23:46.576 "trtype": "TCP", 00:23:46.576 "adrfam": "IPv4", 00:23:46.576 "traddr": "10.0.0.1", 00:23:46.576 "trsvcid": "34824" 00:23:46.576 }, 00:23:46.576 "auth": { 00:23:46.576 "state": "completed", 00:23:46.576 "digest": "sha512", 00:23:46.576 "dhgroup": "ffdhe8192" 00:23:46.576 } 00:23:46.576 } 00:23:46.576 ]' 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.576 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.837 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OWFhYTFkZDEyNWE1MTA0NTY0MjMzZGQ1YWFlOWQwZjfJurJ9: --dhchap-ctrl-secret DHHC-1:02:YTJjMmNjM2QyYTlhMGViM2M1MDk5ZDlhZDI5MmQ4MGVhOWUzZDJmNjBiODMwMWNh4XQ7Vw==: 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.778 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.779 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.779 19:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.350 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:48.350 { 00:23:48.350 "cntlid": 141, 00:23:48.350 "qid": 0, 00:23:48.350 "state": "enabled", 00:23:48.350 "thread": "nvmf_tgt_poll_group_000", 00:23:48.350 "listen_address": { 00:23:48.350 "trtype": "TCP", 00:23:48.350 "adrfam": "IPv4", 
00:23:48.350 "traddr": "10.0.0.2", 00:23:48.350 "trsvcid": "4420" 00:23:48.350 }, 00:23:48.350 "peer_address": { 00:23:48.350 "trtype": "TCP", 00:23:48.350 "adrfam": "IPv4", 00:23:48.350 "traddr": "10.0.0.1", 00:23:48.350 "trsvcid": "34850" 00:23:48.350 }, 00:23:48.350 "auth": { 00:23:48.350 "state": "completed", 00:23:48.350 "digest": "sha512", 00:23:48.350 "dhgroup": "ffdhe8192" 00:23:48.350 } 00:23:48.350 } 00:23:48.350 ]' 00:23:48.350 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.654 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YThjM2FmYjBmMWU4NmY1MzQ2YmJlNGNlY2Y4NTQ5N2Y4YWNhMzEyOTYwYTdhNzEyGhSlbQ==: --dhchap-ctrl-secret DHHC-1:01:ZjZlYWQxNjdjMTJhMDllNzAwMDBiODk4ZTc5ZGU3MDF7ROz1: 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:49.610 19:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:50.181 00:23:50.181 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:50.181 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:50.181 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:50.442 { 00:23:50.442 "cntlid": 143, 00:23:50.442 "qid": 0, 00:23:50.442 "state": "enabled", 00:23:50.442 "thread": "nvmf_tgt_poll_group_000", 00:23:50.442 "listen_address": { 00:23:50.442 "trtype": "TCP", 00:23:50.442 "adrfam": "IPv4", 00:23:50.442 "traddr": "10.0.0.2", 00:23:50.442 "trsvcid": "4420" 00:23:50.442 }, 00:23:50.442 "peer_address": { 00:23:50.442 "trtype": "TCP", 00:23:50.442 "adrfam": "IPv4", 00:23:50.442 "traddr": "10.0.0.1", 00:23:50.442 "trsvcid": "34866" 00:23:50.442 }, 00:23:50.442 "auth": { 00:23:50.442 "state": "completed", 00:23:50.442 "digest": "sha512", 00:23:50.442 "dhgroup": "ffdhe8192" 00:23:50.442 } 00:23:50.442 } 00:23:50.442 ]' 
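After each attach, the test reads back what was actually negotiated rather than trusting the exit status alone. A short sketch of that verification step, assuming the same sockets and subsystem NQN as above:

  # The controller must show up under the expected name (nvme0)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

  # The target-side qpair reports the negotiated auth parameters
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  echo "$qpairs" | jq -r '.[0].auth.digest'    # expected: sha512
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expected: ffdhe8192
  echo "$qpairs" | jq -r '.[0].auth.state'     # expected: completed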
00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.442 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.443 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.703 19:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:51.645 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.645 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
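Each pass also repeats the check with the kernel initiator: the SPDK-side controller is detached, nvme-cli connects with the same DH-HMAC-CHAP material passed as formatted secrets, and the connection is torn down again. A sketch with placeholder secrets (the run above uses the generated per-key values; --dhchap-ctrl-secret is only passed for key indexes that have a controller key configured):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:03:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0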
00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.646 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.226 00:23:52.226 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:52.226 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:52.226 19:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.226 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.226 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.226 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.226 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.226 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.226 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:52.226 { 00:23:52.226 "cntlid": 145, 00:23:52.226 "qid": 0, 00:23:52.226 "state": "enabled", 00:23:52.226 "thread": "nvmf_tgt_poll_group_000", 00:23:52.226 "listen_address": { 00:23:52.226 "trtype": "TCP", 00:23:52.226 "adrfam": "IPv4", 00:23:52.226 "traddr": "10.0.0.2", 00:23:52.226 "trsvcid": "4420" 00:23:52.226 }, 00:23:52.226 "peer_address": { 00:23:52.226 "trtype": "TCP", 00:23:52.226 "adrfam": "IPv4", 00:23:52.226 "traddr": "10.0.0.1", 00:23:52.226 "trsvcid": "34906" 00:23:52.226 }, 00:23:52.226 "auth": { 00:23:52.226 "state": "completed", 00:23:52.226 "digest": "sha512", 
00:23:52.226 "dhgroup": "ffdhe8192" 00:23:52.226 } 00:23:52.226 } 00:23:52.226 ]' 00:23:52.226 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:52.487 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.487 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:52.487 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.487 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:52.487 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.487 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.487 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.748 19:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YTZkODcyNjBjMTY0Y2E0ZjZhMTNjYzc4ZDRjN2Y5MzljZmRjNjI1NGU5NjU1MDdhTBzdBg==: --dhchap-ctrl-secret DHHC-1:03:ZTRmY2MwZWY3MWM4N2ZhMmI2MzM5NDYyNTQzMDAyOWE3MGUyOTk2YjFkMDkyNTc1NTk1YTUyNTZiN2VhMDBkMHqBYY4=: 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # 
valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:53.319 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:53.890 request: 00:23:53.890 { 00:23:53.890 "name": "nvme0", 00:23:53.890 "trtype": "tcp", 00:23:53.890 "traddr": "10.0.0.2", 00:23:53.890 "adrfam": "ipv4", 00:23:53.890 "trsvcid": "4420", 00:23:53.890 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:53.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:53.890 "prchk_reftag": false, 00:23:53.890 "prchk_guard": false, 00:23:53.890 "hdgst": false, 00:23:53.890 "ddgst": false, 00:23:53.890 "dhchap_key": "key2", 00:23:53.890 "method": "bdev_nvme_attach_controller", 00:23:53.890 "req_id": 1 00:23:53.890 } 00:23:53.890 Got JSON-RPC error response 00:23:53.891 response: 00:23:53.891 { 00:23:53.891 "code": -5, 00:23:53.891 "message": "Input/output error" 00:23:53.891 } 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.891 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:54.463 request: 00:23:54.463 { 00:23:54.463 "name": "nvme0", 00:23:54.463 "trtype": "tcp", 00:23:54.463 "traddr": "10.0.0.2", 00:23:54.463 "adrfam": "ipv4", 00:23:54.463 "trsvcid": "4420", 00:23:54.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:54.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:54.463 "prchk_reftag": false, 00:23:54.463 "prchk_guard": false, 00:23:54.463 "hdgst": false, 00:23:54.463 "ddgst": false, 00:23:54.463 "dhchap_key": "key1", 00:23:54.463 "dhchap_ctrlr_key": "ckey2", 00:23:54.463 "method": "bdev_nvme_attach_controller", 00:23:54.463 "req_id": 1 00:23:54.463 } 00:23:54.463 Got JSON-RPC error response 00:23:54.463 response: 00:23:54.463 { 00:23:54.463 "code": -5, 00:23:54.463 "message": "Input/output error" 00:23:54.463 } 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 
0 )) 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.463 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.035 request: 00:23:55.035 { 00:23:55.035 "name": "nvme0", 00:23:55.035 "trtype": "tcp", 00:23:55.035 "traddr": "10.0.0.2", 00:23:55.035 "adrfam": "ipv4", 00:23:55.035 "trsvcid": "4420", 00:23:55.035 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:55.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:55.035 "prchk_reftag": false, 
00:23:55.035 "prchk_guard": false, 00:23:55.035 "hdgst": false, 00:23:55.035 "ddgst": false, 00:23:55.035 "dhchap_key": "key1", 00:23:55.035 "dhchap_ctrlr_key": "ckey1", 00:23:55.035 "method": "bdev_nvme_attach_controller", 00:23:55.035 "req_id": 1 00:23:55.035 } 00:23:55.035 Got JSON-RPC error response 00:23:55.035 response: 00:23:55.035 { 00:23:55.035 "code": -5, 00:23:55.035 "message": "Input/output error" 00:23:55.035 } 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2919985 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2919985 ']' 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2919985 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919985 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2919985' 00:23:55.035 killing process with pid 2919985 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2919985 00:23:55.035 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2919985 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2946919 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- 
# waitforlisten 2946919 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2946919 ']' 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.977 19:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2946919 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2946919 ']' 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
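The target has now been relaunched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc -L nvmf_auth, so the DH-HMAC-CHAP negotiation that follows is traced in detail. The sha512/ffdhe8192 case that target/auth.sh@153 drives next boils down to two RPCs, sketched here with the NQNs, addresses and socket path taken from the trace (key3 names key material set up earlier in the test run, presumably the /tmp/spdk.key-* files removed during cleanup at the end of this log):

  # Target side: allow the host NQN on cnode0 and bind it to DH-HMAC-CHAP key3
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3

  # Host side: attach over TCP and authenticate with the matching key
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

When the handshake succeeds, nvmf_subsystem_get_qpairs reports the new qpair with "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe8192" }, which is exactly what the jq assertions below check.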
00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.920 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.181 19:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.181 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.181 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:57.181 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:57.752 00:23:57.752 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:57.752 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:57.752 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.752 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.752 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.752 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.752 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:58.013 { 00:23:58.013 "cntlid": 1, 00:23:58.013 "qid": 0, 00:23:58.013 "state": "enabled", 00:23:58.013 "thread": "nvmf_tgt_poll_group_000", 00:23:58.013 "listen_address": { 00:23:58.013 "trtype": "TCP", 00:23:58.013 "adrfam": "IPv4", 00:23:58.013 "traddr": "10.0.0.2", 00:23:58.013 "trsvcid": "4420" 00:23:58.013 }, 00:23:58.013 "peer_address": { 00:23:58.013 "trtype": "TCP", 00:23:58.013 "adrfam": "IPv4", 00:23:58.013 "traddr": "10.0.0.1", 00:23:58.013 "trsvcid": "39330" 00:23:58.013 }, 00:23:58.013 "auth": { 00:23:58.013 "state": "completed", 00:23:58.013 "digest": "sha512", 00:23:58.013 "dhgroup": "ffdhe8192" 00:23:58.013 } 00:23:58.013 } 00:23:58.013 ]' 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.013 19:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.274 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZTczZTUyYzgzNDkzYjAyNTk5ZDlhMWEwNzM5MTYwNjNkZGI5Yjg5NmI0YmUxY2U2Zjc3YTQxZGQ5Mjk0YTBiZiTZQm0=: 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:58.846 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.107 19:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.367 request: 00:23:59.367 { 00:23:59.367 "name": "nvme0", 00:23:59.368 "trtype": "tcp", 00:23:59.368 "traddr": "10.0.0.2", 00:23:59.368 "adrfam": "ipv4", 00:23:59.368 "trsvcid": "4420", 00:23:59.368 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:59.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:59.368 "prchk_reftag": false, 00:23:59.368 "prchk_guard": false, 00:23:59.368 "hdgst": false, 00:23:59.368 "ddgst": false, 00:23:59.368 "dhchap_key": "key3", 00:23:59.368 "method": "bdev_nvme_attach_controller", 00:23:59.368 "req_id": 1 00:23:59.368 } 00:23:59.368 Got JSON-RPC error response 00:23:59.368 response: 00:23:59.368 { 00:23:59.368 "code": -5, 00:23:59.368 "message": "Input/output error" 00:23:59.368 } 00:23:59.368 19:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.368 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.628 request: 00:23:59.628 { 00:23:59.628 "name": "nvme0", 00:23:59.628 "trtype": "tcp", 00:23:59.628 "traddr": "10.0.0.2", 00:23:59.628 "adrfam": "ipv4", 00:23:59.628 "trsvcid": "4420", 00:23:59.628 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:59.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:59.628 "prchk_reftag": false, 00:23:59.628 "prchk_guard": false, 00:23:59.628 "hdgst": false, 00:23:59.628 "ddgst": false, 00:23:59.628 "dhchap_key": "key3", 00:23:59.628 
"method": "bdev_nvme_attach_controller", 00:23:59.628 "req_id": 1 00:23:59.628 } 00:23:59.628 Got JSON-RPC error response 00:23:59.628 response: 00:23:59.628 { 00:23:59.628 "code": -5, 00:23:59.628 "message": "Input/output error" 00:23:59.628 } 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.628 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:59.889 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:59.890 request: 00:23:59.890 { 00:23:59.890 "name": "nvme0", 00:23:59.890 "trtype": "tcp", 00:23:59.890 "traddr": "10.0.0.2", 00:23:59.890 "adrfam": "ipv4", 00:23:59.890 "trsvcid": "4420", 00:23:59.890 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:59.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:59.890 "prchk_reftag": false, 00:23:59.890 "prchk_guard": false, 00:23:59.890 "hdgst": false, 00:23:59.890 "ddgst": false, 00:23:59.890 "dhchap_key": "key0", 00:23:59.890 "dhchap_ctrlr_key": "key1", 00:23:59.890 "method": "bdev_nvme_attach_controller", 00:23:59.890 "req_id": 1 00:23:59.890 } 00:23:59.890 Got JSON-RPC error response 00:23:59.890 response: 00:23:59.890 { 00:23:59.890 "code": -5, 00:23:59.890 "message": "Input/output error" 00:23:59.890 } 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:59.890 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:00.150 00:24:00.150 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:24:00.150 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
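After the expected -5 (Input/output error) rejections above, target/auth.sh@192 retries the attach with key0 alone, and the steps that follow just confirm the controller came up and then detach it before the final cleanup. Condensed from the host-side trace that follows (same /var/tmp/host.sock socket; the escaped [[ nvme0 == \n\v\m\e\0 ]] in the xtrace output is simply a literal string comparison):

  # Verify the authenticated controller registered under the expected name ...
  name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # ... then detach it so the next case starts from a clean host state
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0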
00:24:00.150 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2920328 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2920328 ']' 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2920328 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.411 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2920328 00:24:00.671 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.672 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.672 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2920328' 00:24:00.672 killing process with pid 2920328 00:24:00.672 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2920328 00:24:00.672 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2920328 00:24:01.614 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:01.614 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.614 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:24:01.614 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.614 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:24:01.614 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.614 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.614 rmmod nvme_tcp 00:24:01.614 rmmod nvme_fabrics 00:24:01.614 rmmod nvme_keyring 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2946919 ']' 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2946919 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2946919 ']' 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2946919 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2946919 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2946919' 00:24:01.875 killing process with pid 2946919 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2946919 00:24:01.875 19:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2946919 00:24:02.816 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.816 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.816 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.816 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.817 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.817 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.817 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.817 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.j7K /tmp/spdk.key-sha256.K8P /tmp/spdk.key-sha384.1TY /tmp/spdk.key-sha512.322 /tmp/spdk.key-sha512.lMr /tmp/spdk.key-sha384.4iB /tmp/spdk.key-sha256.SuY '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:04.731 00:24:04.731 real 2m27.317s 00:24:04.731 user 5m25.744s 00:24:04.731 sys 0m21.596s 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.731 ************************************ 00:24:04.731 END TEST nvmf_auth_target 00:24:04.731 ************************************ 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:04.731 ************************************ 00:24:04.731 START TEST nvmf_bdevio_no_huge 00:24:04.731 ************************************ 00:24:04.731 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:04.993 * Looking for test storage... 00:24:04.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
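From here the log switches to the nvmf_bdevio_no_huge test, which re-sources test/nvmf/common.sh; the variable assignments traced above are the fixtures every nvmf target test relies on. A minimal sketch of that setup using the values visible in this run (the exact way common.sh derives NVME_HOSTID from the generated NQN is an assumption here):

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: the uuid suffix of the generated NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NET_TYPE=phy                       # physical NICs; the tcp init path assigns 10.0.0.1/10.0.0.2 to the cvl_0_* devices

Because NET_TYPE=phy, the gather_supported_nvmf_pci_devs / nvmf_tcp_init steps further down probe the two E810 ports (0000:4b:00.0/1), move cvl_0_0 into the cvl_0_0_ns_spdk namespace and open TCP port 4420 in iptables, as the trace below shows.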
00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.993 19:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.993 19:28:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:24:13.170 19:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:13.170 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.170 19:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:13.170 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:13.170 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.170 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.170 
19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:13.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.171 
19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:13.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:24:13.171 00:24:13.171 --- 10.0.0.2 ping statistics --- 00:24:13.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.171 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:13.171 00:24:13.171 --- 10.0.0.1 ping statistics --- 00:24:13.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.171 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:13.171 19:28:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2952302 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2952302 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2952302 ']' 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
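Unlike the auth target above, this nvmf_tgt instance is started with --no-huge -s 1024, i.e. it runs without hugepages inside a 1024 MB memory cap, which is the point of the bdevio_no_huge case. Once the RPC socket is up, the target-side plumbing the test builds (traced below via rpc_cmd, which effectively wraps scripts/rpc.py against the /var/tmp/spdk.sock rpc_addr visible in the trace) is roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport with the NVMF_TRANSPORT_OPTS from the trace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then launched the same way (--no-huge -s 1024) with a JSON config produced by gen_nvmf_target_json, which emits a bdev_nvme_attach_controller entry pointing at that 10.0.0.2:4420 listener.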
00:24:13.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 [2024-07-22 19:28:31.128194] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:13.171 [2024-07-22 19:28:31.128342] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:13.171 [2024-07-22 19:28:31.299852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.171 [2024-07-22 19:28:31.510796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.171 [2024-07-22 19:28:31.510856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.171 [2024-07-22 19:28:31.510871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.171 [2024-07-22 19:28:31.510885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.171 [2024-07-22 19:28:31.510899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.171 [2024-07-22 19:28:31.511120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:13.171 [2024-07-22 19:28:31.511271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:13.171 [2024-07-22 19:28:31.511483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.171 [2024-07-22 19:28:31.511510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 [2024-07-22 19:28:31.932977] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 Malloc0 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.171 19:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:13.171 [2024-07-22 19:28:32.013650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:13.171 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:24:13.172 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:24:13.172 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:13.172 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:13.172 { 00:24:13.172 "params": { 00:24:13.172 "name": "Nvme$subsystem", 00:24:13.172 "trtype": "$TEST_TRANSPORT", 00:24:13.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.172 "adrfam": "ipv4", 00:24:13.172 "trsvcid": "$NVMF_PORT", 00:24:13.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.172 "hdgst": ${hdgst:-false}, 00:24:13.172 "ddgst": ${ddgst:-false} 00:24:13.172 }, 00:24:13.172 "method": "bdev_nvme_attach_controller" 00:24:13.172 } 00:24:13.172 EOF 00:24:13.172 )") 00:24:13.172 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:24:13.172 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@556 -- # jq . 00:24:13.172 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:24:13.172 19:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:13.172 "params": { 00:24:13.172 "name": "Nvme1", 00:24:13.172 "trtype": "tcp", 00:24:13.172 "traddr": "10.0.0.2", 00:24:13.172 "adrfam": "ipv4", 00:24:13.172 "trsvcid": "4420", 00:24:13.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.172 "hdgst": false, 00:24:13.172 "ddgst": false 00:24:13.172 }, 00:24:13.172 "method": "bdev_nvme_attach_controller" 00:24:13.172 }' 00:24:13.172 [2024-07-22 19:28:32.102556] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:13.172 [2024-07-22 19:28:32.102675] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2952651 ] 00:24:13.433 [2024-07-22 19:28:32.247608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.693 [2024-07-22 19:28:32.442754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.693 [2024-07-22 19:28:32.442833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.693 [2024-07-22 19:28:32.442834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.954 I/O targets: 00:24:13.954 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:13.954 00:24:13.954 00:24:13.954 CUnit - A unit testing framework for C - Version 2.1-3 00:24:13.954 http://cunit.sourceforge.net/ 00:24:13.954 00:24:13.954 00:24:13.954 Suite: bdevio tests on: Nvme1n1 00:24:13.954 Test: blockdev write read block ...passed 00:24:13.955 Test: blockdev write zeroes read block ...passed 00:24:13.955 Test: blockdev write zeroes read no split ...passed 00:24:14.216 Test: blockdev write zeroes read split ...passed 00:24:14.216 Test: blockdev write zeroes read split partial ...passed 00:24:14.216 Test: blockdev reset ...[2024-07-22 19:28:32.980122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:14.216 [2024-07-22 19:28:32.980238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000386600 (9): Bad file descriptor 00:24:14.216 [2024-07-22 19:28:32.997197] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
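Condensed from the trace above, the no-huge bdevio pass boils down to: start nvmf_tgt inside the target network namespace with hugepages disabled, expose a malloc-backed NVMe/TCP subsystem over RPC, and point bdevio at it through the bdev_nvme_attach_controller JSON printed just above. A minimal sketch of that sequence (binary and script paths are shortened to nvmf_tgt, rpc.py and bdevio for readability; those short names are assumptions, the arguments mirror the trace):

  # Target side: 1024 MiB of ordinary memory instead of hugepages, core mask 0x78
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

  # Expose a 64 MiB malloc bdev (512-byte blocks) as a namespace of cnode1 on 10.0.0.2:4420
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevio also runs without hugepages and reads the attach JSON from fd 62
  bdevio --json /dev/fd/62 --no-huge -s 1024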
00:24:14.216 passed 00:24:14.216 Test: blockdev write read 8 blocks ...passed 00:24:14.216 Test: blockdev write read size > 128k ...passed 00:24:14.216 Test: blockdev write read invalid size ...passed 00:24:14.216 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:14.216 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:14.216 Test: blockdev write read max offset ...passed 00:24:14.216 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:14.478 Test: blockdev writev readv 8 blocks ...passed 00:24:14.478 Test: blockdev writev readv 30 x 1block ...passed 00:24:14.478 Test: blockdev writev readv block ...passed 00:24:14.478 Test: blockdev writev readv size > 128k ...passed 00:24:14.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:14.478 Test: blockdev comparev and writev ...[2024-07-22 19:28:33.226596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.226634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.226652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.226662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.227254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.227268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.227281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.227291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.227873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.227888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.227901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.227909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.228518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.228533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.228545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:14.478 [2024-07-22 19:28:33.228553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:14.478 passed 00:24:14.478 Test: blockdev nvme passthru rw ...passed 00:24:14.478 Test: blockdev nvme passthru vendor specific ...[2024-07-22 19:28:33.313190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.478 [2024-07-22 19:28:33.313216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.313686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.478 [2024-07-22 19:28:33.313699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.314108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.478 [2024-07-22 19:28:33.314119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:14.478 [2024-07-22 19:28:33.314531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.478 [2024-07-22 19:28:33.314543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:14.478 passed 00:24:14.478 Test: blockdev nvme admin passthru ...passed 00:24:14.478 Test: blockdev copy ...passed 00:24:14.478 00:24:14.478 Run Summary: Type Total Ran Passed Failed Inactive 00:24:14.478 suites 1 1 n/a 0 0 00:24:14.478 tests 23 23 23 0 0 00:24:14.478 asserts 152 152 152 0 n/a 00:24:14.478 00:24:14.478 Elapsed time = 1.194 seconds 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:15.049 rmmod nvme_tcp 00:24:15.049 rmmod nvme_fabrics 00:24:15.049 rmmod nvme_keyring 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:24:15.049 19:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2952302 ']' 00:24:15.049 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2952302 00:24:15.049 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2952302 ']' 00:24:15.049 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2952302 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2952302 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2952302' 00:24:15.310 killing process with pid 2952302 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2952302 00:24:15.310 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2952302 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.571 19:28:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.119 00:24:18.119 real 0m12.917s 00:24:18.119 user 0m17.151s 00:24:18.119 sys 0m6.438s 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:18.119 ************************************ 00:24:18.119 END TEST nvmf_bdevio_no_huge 00:24:18.119 ************************************ 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.119 ************************************ 00:24:18.119 START TEST nvmf_tls 00:24:18.119 ************************************ 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:18.119 * Looking for test storage... 00:24:18.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.119 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.120 19:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:26.266 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.266 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:26.267 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:26.267 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:26.267 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.267 
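At this point nvmf_tcp_init has settled on the two ice ports: cvl_0_0 as the target interface and cvl_0_1 as the initiator interface. The namespace wiring performed by the records that follow amounts to roughly this (a sketch; interface names and addresses as in the trace, the initial address flushes omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator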
19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:26.267 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:26.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:24:26.267 00:24:26.267 --- 10.0.0.2 ping statistics --- 00:24:26.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.267 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:24:26.267 00:24:26.267 --- 10.0.0.1 ping statistics --- 00:24:26.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.267 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2957137 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2957137 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2957137 ']' 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 [2024-07-22 19:28:44.200527] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:26.267 [2024-07-22 19:28:44.200641] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.267 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.267 [2024-07-22 19:28:44.352049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.267 [2024-07-22 19:28:44.578619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.267 [2024-07-22 19:28:44.578686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.267 [2024-07-22 19:28:44.578701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.267 [2024-07-22 19:28:44.578711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.267 [2024-07-22 19:28:44.578723] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.267 [2024-07-22 19:28:44.578760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:26.267 19:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:26.267 true 00:24:26.267 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:26.267 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:24:26.529 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:24:26.529 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:26.529 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:26.790 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:26.790 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:24:26.790 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:24:26.790 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:26.790 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:24:27.051 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:27.051 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:24:27.051 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:24:27.051 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:27.051 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:27.051 19:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:24:27.312 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:24:27.312 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:27.312 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:27.573 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:27.573 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:27.573 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:24:27.573 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:27.573 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:27.833 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:28.094 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Ai0HM4rUy4 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.QGfVgDVYtd 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Ai0HM4rUy4 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QGfVgDVYtd 00:24:28.095 19:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:28.095 19:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:28.666 19:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Ai0HM4rUy4 00:24:28.666 19:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ai0HM4rUy4 00:24:28.666 19:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:28.666 [2024-07-22 19:28:47.505078] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.666 19:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:28.926 19:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:28.926 [2024-07-22 19:28:47.845952] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.926 [2024-07-22 19:28:47.846158] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.926 19:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:29.187 malloc0 00:24:29.187 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:29.448 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ai0HM4rUy4 00:24:29.448 [2024-07-22 19:28:48.362256] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:29.448 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Ai0HM4rUy4 00:24:29.709 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.706 Initializing NVMe Controllers 00:24:39.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:39.706 Initialization complete. Launching workers. 00:24:39.706 ======================================================== 00:24:39.706 Latency(us) 00:24:39.706 Device Information : IOPS MiB/s Average min max 00:24:39.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15452.92 60.36 4141.74 1616.46 5058.70 00:24:39.706 ======================================================== 00:24:39.706 Total : 15452.92 60.36 4141.74 1616.46 5058.70 00:24:39.706 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ai0HM4rUy4 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ai0HM4rUy4' 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2960044 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2960044 /var/tmp/bdevperf.sock 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2960044 ']' 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.706 19:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.706 [2024-07-22 19:28:58.651091] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:39.706 [2024-07-22 19:28:58.651193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960044 ] 00:24:39.967 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.967 [2024-07-22 19:28:58.747841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.967 [2024-07-22 19:28:58.881353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.538 19:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.538 19:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:40.538 19:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ai0HM4rUy4 00:24:40.800 [2024-07-22 19:28:59.524031] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.800 [2024-07-22 19:28:59.524133] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:40.800 TLSTESTn1 00:24:40.800 19:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:40.800 Running I/O for 10 seconds... 
00:24:53.033 00:24:53.033 Latency(us) 00:24:53.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.033 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:53.033 Verification LBA range: start 0x0 length 0x2000 00:24:53.033 TLSTESTn1 : 10.05 5145.45 20.10 0.00 0.00 24801.76 6225.92 78206.29 00:24:53.034 =================================================================================================================== 00:24:53.034 Total : 5145.45 20.10 0.00 0.00 24801.76 6225.92 78206.29 00:24:53.034 0 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2960044 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2960044 ']' 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2960044 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2960044 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2960044' 00:24:53.034 killing process with pid 2960044 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2960044 00:24:53.034 Received shutdown signal, test time was about 10.000000 seconds 00:24:53.034 00:24:53.034 Latency(us) 00:24:53.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.034 =================================================================================================================== 00:24:53.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.034 [2024-07-22 19:29:09.875336] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:53.034 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2960044 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QGfVgDVYtd 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QGfVgDVYtd 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
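The key file used above, /tmp/tmp.Ai0HM4rUy4, and the one the next, deliberately failing attach attempt passes in, /tmp/tmp.QGfVgDVYtd, both hold a PSK in the interchange format that format_interchange_psk emitted earlier: the NVMeTLSkey-1 prefix, a two-digit hash identifier (01, i.e. SHA-256), and the base64 of the 32-byte configured PSK followed by its 4-byte CRC-32. A quick way to sanity-check such a key from the shell (a sketch; the field layout is inferred from the values printed in this run):

  key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  cut -d: -f3 <<< "$key" | base64 -d | head -c 32; echo         # -> 00112233445566778899aabbccddeeff
  cut -d: -f3 <<< "$key" | base64 -d | tail -c 4 | od -An -tx1  # trailing CRC-32 bytes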
00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QGfVgDVYtd 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QGfVgDVYtd' 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2962190 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2962190 /var/tmp/bdevperf.sock 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2962190 ']' 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.034 19:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.034 [2024-07-22 19:29:10.498238] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:53.034 [2024-07-22 19:29:10.498346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962190 ] 00:24:53.034 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.034 [2024-07-22 19:29:10.596534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.034 [2024-07-22 19:29:10.733692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QGfVgDVYtd 00:24:53.034 [2024-07-22 19:29:11.368532] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:53.034 [2024-07-22 19:29:11.368627] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:53.034 [2024-07-22 19:29:11.377523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:53.034 [2024-07-22 19:29:11.378132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (107): Transport endpoint is not connected 00:24:53.034 [2024-07-22 19:29:11.379115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:24:53.034 [2024-07-22 19:29:11.380116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.034 [2024-07-22 19:29:11.380131] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:53.034 [2024-07-22 19:29:11.380144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
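This is the outcome the harness asserts here: /tmp/tmp.QGfVgDVYtd holds a PSK that does not match the one registered on the target for this host/subsystem pair, so the TLS handshake cannot complete, the connection is torn down, and the attach RPC is expected to fail. A stand-alone version of the same negative check might look like the following (hypothetical wrapper, not the harness's NOT helper):

if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
       -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
       --psk /tmp/tmp.QGfVgDVYtd; then
    echo "unexpected: attach succeeded with a mismatched PSK" >&2
    exit 1
fi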
00:24:53.034 request: 00:24:53.034 { 00:24:53.034 "name": "TLSTEST", 00:24:53.034 "trtype": "tcp", 00:24:53.034 "traddr": "10.0.0.2", 00:24:53.034 "adrfam": "ipv4", 00:24:53.034 "trsvcid": "4420", 00:24:53.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.034 "prchk_reftag": false, 00:24:53.034 "prchk_guard": false, 00:24:53.034 "hdgst": false, 00:24:53.034 "ddgst": false, 00:24:53.034 "psk": "/tmp/tmp.QGfVgDVYtd", 00:24:53.034 "method": "bdev_nvme_attach_controller", 00:24:53.034 "req_id": 1 00:24:53.034 } 00:24:53.034 Got JSON-RPC error response 00:24:53.034 response: 00:24:53.034 { 00:24:53.034 "code": -5, 00:24:53.034 "message": "Input/output error" 00:24:53.034 } 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2962190 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2962190 ']' 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2962190 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2962190 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2962190' 00:24:53.034 killing process with pid 2962190 00:24:53.034 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2962190 00:24:53.034 Received shutdown signal, test time was about 10.000000 seconds 00:24:53.034 00:24:53.034 Latency(us) 00:24:53.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.035 =================================================================================================================== 00:24:53.035 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:53.035 [2024-07-22 19:29:11.462274] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2962190 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ai0HM4rUy4 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ai0HM4rUy4 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ai0HM4rUy4 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ai0HM4rUy4' 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2962438 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2962438 /var/tmp/bdevperf.sock 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2962438 ']' 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.035 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.339 [2024-07-22 19:29:12.038971] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:53.339 [2024-07-22 19:29:12.039082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962438 ] 00:24:53.339 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.339 [2024-07-22 19:29:12.136659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.339 [2024-07-22 19:29:12.271532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.937 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.937 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:53.937 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Ai0HM4rUy4 00:24:54.198 [2024-07-22 19:29:12.922641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:54.198 [2024-07-22 19:29:12.922737] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:54.199 [2024-07-22 19:29:12.931139] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:54.199 [2024-07-22 19:29:12.931164] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:54.199 [2024-07-22 19:29:12.931196] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:54.199 [2024-07-22 19:29:12.932126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (107): Transport endpoint is not connected 00:24:54.199 [2024-07-22 19:29:12.933107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:24:54.199 [2024-07-22 19:29:12.934108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.199 [2024-07-22 19:29:12.934123] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:54.199 [2024-07-22 19:29:12.934136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
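The error above shows how the lookup is keyed: the target searches for a PSK under an identity built from the host NQN and the subsystem NQN (NVMe0R01 <hostnqn> <subnqn> in the message), so a key registered only for host1 is not found when host2 connects, and the symmetric case further down, which keeps host1 but targets cnode2, fails the same way. Purely as an illustration (deliberately not done in this test), registering the key for the second host on the target would make that identity resolvable:

# Hypothetical, not part of this test: allow host2 to use the same PSK on cnode1.
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Ai0HM4rUy4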
00:24:54.199 request: 00:24:54.199 { 00:24:54.199 "name": "TLSTEST", 00:24:54.199 "trtype": "tcp", 00:24:54.199 "traddr": "10.0.0.2", 00:24:54.199 "adrfam": "ipv4", 00:24:54.199 "trsvcid": "4420", 00:24:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.199 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:54.199 "prchk_reftag": false, 00:24:54.199 "prchk_guard": false, 00:24:54.199 "hdgst": false, 00:24:54.199 "ddgst": false, 00:24:54.199 "psk": "/tmp/tmp.Ai0HM4rUy4", 00:24:54.199 "method": "bdev_nvme_attach_controller", 00:24:54.199 "req_id": 1 00:24:54.199 } 00:24:54.199 Got JSON-RPC error response 00:24:54.199 response: 00:24:54.199 { 00:24:54.199 "code": -5, 00:24:54.199 "message": "Input/output error" 00:24:54.199 } 00:24:54.199 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2962438 00:24:54.199 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2962438 ']' 00:24:54.199 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2962438 00:24:54.199 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:54.199 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.199 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2962438 00:24:54.199 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:54.199 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:54.199 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2962438' 00:24:54.199 killing process with pid 2962438 00:24:54.199 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2962438 00:24:54.199 Received shutdown signal, test time was about 10.000000 seconds 00:24:54.199 00:24:54.199 Latency(us) 00:24:54.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.199 =================================================================================================================== 00:24:54.199 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:54.199 [2024-07-22 19:29:13.018416] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:54.199 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2962438 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ai0HM4rUy4 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ai0HM4rUy4 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ai0HM4rUy4 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ai0HM4rUy4' 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2962756 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2962756 /var/tmp/bdevperf.sock 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2962756 ']' 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.771 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.772 19:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.772 [2024-07-22 19:29:13.596997] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:54.772 [2024-07-22 19:29:13.597108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962756 ] 00:24:54.772 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.772 [2024-07-22 19:29:13.694815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.033 [2024-07-22 19:29:13.831829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ai0HM4rUy4 00:24:55.607 [2024-07-22 19:29:14.490979] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.607 [2024-07-22 19:29:14.491075] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:55.607 [2024-07-22 19:29:14.498117] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:55.607 [2024-07-22 19:29:14.498144] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:55.607 [2024-07-22 19:29:14.498173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:55.607 [2024-07-22 19:29:14.498489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (107): Transport endpoint is not connected 00:24:55.607 [2024-07-22 19:29:14.499471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:24:55.607 [2024-07-22 19:29:14.500473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:55.607 [2024-07-22 19:29:14.500492] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:55.607 [2024-07-22 19:29:14.500502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:55.607 request: 00:24:55.607 { 00:24:55.607 "name": "TLSTEST", 00:24:55.607 "trtype": "tcp", 00:24:55.607 "traddr": "10.0.0.2", 00:24:55.607 "adrfam": "ipv4", 00:24:55.607 "trsvcid": "4420", 00:24:55.607 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:55.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.607 "prchk_reftag": false, 00:24:55.607 "prchk_guard": false, 00:24:55.607 "hdgst": false, 00:24:55.607 "ddgst": false, 00:24:55.607 "psk": "/tmp/tmp.Ai0HM4rUy4", 00:24:55.607 "method": "bdev_nvme_attach_controller", 00:24:55.607 "req_id": 1 00:24:55.607 } 00:24:55.607 Got JSON-RPC error response 00:24:55.607 response: 00:24:55.607 { 00:24:55.607 "code": -5, 00:24:55.607 "message": "Input/output error" 00:24:55.607 } 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2962756 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2962756 ']' 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2962756 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.607 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2962756 00:24:55.873 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:55.873 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:55.873 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2962756' 00:24:55.873 killing process with pid 2962756 00:24:55.873 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2962756 00:24:55.873 Received shutdown signal, test time was about 10.000000 seconds 00:24:55.873 00:24:55.873 Latency(us) 00:24:55.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.873 =================================================================================================================== 00:24:55.873 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:55.873 [2024-07-22 19:29:14.586759] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:55.873 19:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2962756 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:56.135 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2963099 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2963099 /var/tmp/bdevperf.sock 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2963099 ']' 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.396 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.396 [2024-07-22 19:29:15.173077] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:56.396 [2024-07-22 19:29:15.173190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963099 ] 00:24:56.396 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.396 [2024-07-22 19:29:15.270170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.657 [2024-07-22 19:29:15.403842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.230 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.230 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:57.230 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:57.230 [2024-07-22 19:29:16.058863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:57.230 [2024-07-22 19:29:16.060256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388180 (9): Bad file descriptor 00:24:57.230 [2024-07-22 19:29:16.061252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.230 [2024-07-22 19:29:16.061274] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:57.230 [2024-07-22 19:29:16.061285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
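Here the attach offers no PSK at all; the listener used by these TLS tests is added with -k (as the setup sequence later in this log also shows), so the handshake has no key to work with and the connection fails in the same way. As a hypothetical contrast, not exercised in this test, the same plain attach would be expected to work against a listener created without -k; port 4421 below is made up for the illustration:

rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # no -k: plain TCP
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NOTLS -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1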
00:24:57.230 request: 00:24:57.230 { 00:24:57.230 "name": "TLSTEST", 00:24:57.230 "trtype": "tcp", 00:24:57.230 "traddr": "10.0.0.2", 00:24:57.230 "adrfam": "ipv4", 00:24:57.230 "trsvcid": "4420", 00:24:57.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:57.230 "prchk_reftag": false, 00:24:57.230 "prchk_guard": false, 00:24:57.230 "hdgst": false, 00:24:57.230 "ddgst": false, 00:24:57.230 "method": "bdev_nvme_attach_controller", 00:24:57.230 "req_id": 1 00:24:57.230 } 00:24:57.230 Got JSON-RPC error response 00:24:57.230 response: 00:24:57.230 { 00:24:57.230 "code": -5, 00:24:57.230 "message": "Input/output error" 00:24:57.230 } 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2963099 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2963099 ']' 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2963099 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2963099 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2963099' 00:24:57.230 killing process with pid 2963099 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2963099 00:24:57.230 Received shutdown signal, test time was about 10.000000 seconds 00:24:57.230 00:24:57.230 Latency(us) 00:24:57.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.230 =================================================================================================================== 00:24:57.230 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:57.230 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2963099 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2957137 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2957137 ']' 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2957137 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2957137 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2957137' 00:24:57.802 killing process with pid 2957137 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2957137 00:24:57.802 [2024-07-22 19:29:16.695516] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:57.802 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2957137 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.IMCPSUYoQS 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IMCPSUYoQS 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2963625 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2963625 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2963625 ']' 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.746 19:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:58.746 [2024-07-22 19:29:17.568659] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:58.746 [2024-07-22 19:29:17.568773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.746 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.008 [2024-07-22 19:29:17.707247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.008 [2024-07-22 19:29:17.855894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.008 [2024-07-22 19:29:17.855931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.008 [2024-07-22 19:29:17.855941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.008 [2024-07-22 19:29:17.855947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.008 [2024-07-22 19:29:17.855956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
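The key string assembled above (NVMeTLSkey-1:02:...) is in the TLS PSK interchange format: the configured key bytes are base64-encoded together with a trailing CRC-32, and the two-digit field after the prefix names the hash (02 appears to select SHA-384; treat that mapping as an assumption). A minimal stand-alone sketch of that composition, separate from the harness's own format_interchange_psk helper; if this reading is right, it prints the same string that was written to /tmp/tmp.IMCPSUYoQS:

key=00112233445566778899aabbccddeeff0011223344556677   # configured PSK, same value as above
python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 of the key bytes, appended before encoding
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))' "$key"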
00:24:59.008 [2024-07-22 19:29:17.855982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IMCPSUYoQS 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMCPSUYoQS 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:59.580 [2024-07-22 19:29:18.467943] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.580 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:59.842 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:59.842 [2024-07-22 19:29:18.772711] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:59.842 [2024-07-22 19:29:18.772924] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.842 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:00.103 malloc0 00:25:00.103 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS 00:25:00.364 [2024-07-22 19:29:19.277215] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMCPSUYoQS 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IMCPSUYoQS' 00:25:00.364 19:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2963990 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2963990 /var/tmp/bdevperf.sock 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2963990 ']' 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.364 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:00.626 [2024-07-22 19:29:19.367597] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:00.626 [2024-07-22 19:29:19.367697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963990 ] 00:25:00.626 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.626 [2024-07-22 19:29:19.463226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.887 [2024-07-22 19:29:19.597406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.148 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.148 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:01.148 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS 00:25:01.408 [2024-07-22 19:29:20.240047] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.408 [2024-07-22 19:29:20.240135] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:01.408 TLSTESTn1 00:25:01.408 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:01.669 Running I/O for 10 seconds... 
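For reference, the target-side preparation that precedes this run, condensed from the setup_nvmf_tgt steps above (shortened rpc.py path, same arguments as in the log):

rpc.py nvmf_create_transport -t tcp -o                         # TCP transport
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                              # -k marks the listener as TLS
rpc.py bdev_malloc_create 32 4096 -b malloc0                   # backing namespace
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.IMCPSUYoQS                                  # PSK registered for host1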
00:25:11.683 00:25:11.683 Latency(us) 00:25:11.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.683 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:11.683 Verification LBA range: start 0x0 length 0x2000 00:25:11.683 TLSTESTn1 : 10.06 4892.77 19.11 0.00 0.00 26073.42 6498.99 60293.12 00:25:11.683 =================================================================================================================== 00:25:11.683 Total : 4892.77 19.11 0.00 0.00 26073.42 6498.99 60293.12 00:25:11.683 0 00:25:11.683 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:11.683 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2963990 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2963990 ']' 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2963990 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2963990 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2963990' 00:25:11.684 killing process with pid 2963990 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2963990 00:25:11.684 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.684 00:25:11.684 Latency(us) 00:25:11.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.684 =================================================================================================================== 00:25:11.684 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.684 [2024-07-22 19:29:30.596079] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:11.684 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2963990 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IMCPSUYoQS 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMCPSUYoQS 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMCPSUYoQS 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:12.255 
19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMCPSUYoQS 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IMCPSUYoQS' 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2966165 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2966165 /var/tmp/bdevperf.sock 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2966165 ']' 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.255 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:12.255 [2024-07-22 19:29:31.206411] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:12.255 [2024-07-22 19:29:31.206522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966165 ] 00:25:12.516 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.516 [2024-07-22 19:29:31.305652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.516 [2024-07-22 19:29:31.439901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.088 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.088 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:13.088 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS 00:25:13.348 [2024-07-22 19:29:32.070474] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.348 [2024-07-22 19:29:32.070524] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:13.348 [2024-07-22 19:29:32.070535] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IMCPSUYoQS 00:25:13.348 request: 00:25:13.348 { 00:25:13.348 "name": "TLSTEST", 00:25:13.348 "trtype": "tcp", 00:25:13.348 "traddr": "10.0.0.2", 00:25:13.348 "adrfam": "ipv4", 00:25:13.348 "trsvcid": "4420", 00:25:13.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:13.348 "prchk_reftag": false, 00:25:13.348 "prchk_guard": false, 00:25:13.348 "hdgst": false, 00:25:13.348 "ddgst": false, 00:25:13.348 "psk": "/tmp/tmp.IMCPSUYoQS", 00:25:13.348 "method": "bdev_nvme_attach_controller", 00:25:13.348 "req_id": 1 00:25:13.348 } 00:25:13.348 Got JSON-RPC error response 00:25:13.348 response: 00:25:13.348 { 00:25:13.349 "code": -1, 00:25:13.349 "message": "Operation not permitted" 00:25:13.349 } 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2966165 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2966165 ']' 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2966165 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2966165 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2966165' 00:25:13.349 killing process with pid 2966165 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2966165 00:25:13.349 Received shutdown signal, test time was about 10.000000 seconds 00:25:13.349 
00:25:13.349 Latency(us) 00:25:13.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.349 =================================================================================================================== 00:25:13.349 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:13.349 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2966165 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2963625 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2963625 ']' 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2963625 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2963625 00:25:13.920 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:13.921 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:13.921 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2963625' 00:25:13.921 killing process with pid 2963625 00:25:13.921 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2963625 00:25:13.921 [2024-07-22 19:29:32.695981] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:13.921 19:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2963625 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2966731 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2966731 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2966731 ']' 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.496 19:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.496 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.760 [2024-07-22 19:29:33.482533] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:14.760 [2024-07-22 19:29:33.482640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.760 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.760 [2024-07-22 19:29:33.624744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.021 [2024-07-22 19:29:33.767582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.021 [2024-07-22 19:29:33.767624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.021 [2024-07-22 19:29:33.767633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.021 [2024-07-22 19:29:33.767640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.021 [2024-07-22 19:29:33.767648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:15.021 [2024-07-22 19:29:33.767670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.283 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.283 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:15.283 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.283 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.283 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IMCPSUYoQS 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IMCPSUYoQS 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.IMCPSUYoQS 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMCPSUYoQS 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:15.544 [2024-07-22 19:29:34.395727] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.544 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:15.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:15.805 [2024-07-22 19:29:34.688453] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:15.805 [2024-07-22 19:29:34.688659] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:16.066 malloc0 00:25:16.066 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:16.326 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS 00:25:16.326 [2024-07-22 19:29:35.162720] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:16.326 [2024-07-22 19:29:35.162750] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:25:16.326 [2024-07-22 19:29:35.162772] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:16.326 request: 00:25:16.326 { 00:25:16.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.326 "host": "nqn.2016-06.io.spdk:host1", 00:25:16.326 "psk": "/tmp/tmp.IMCPSUYoQS", 00:25:16.326 "method": "nvmf_subsystem_add_host", 00:25:16.326 "req_id": 1 00:25:16.326 } 00:25:16.327 Got JSON-RPC error response 00:25:16.327 response: 00:25:16.327 { 00:25:16.327 "code": -32603, 00:25:16.327 "message": "Internal error" 00:25:16.327 } 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2966731 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2966731 ']' 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2966731 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2966731 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2966731' 00:25:16.327 killing process with pid 2966731 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2966731 00:25:16.327 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2966731 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IMCPSUYoQS 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2967209 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2967209 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2967209 ']' 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.269 19:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.269 [2024-07-22 19:29:36.001992] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:17.269 [2024-07-22 19:29:36.002090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.269 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.269 [2024-07-22 19:29:36.135045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.530 [2024-07-22 19:29:36.269379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.530 [2024-07-22 19:29:36.269420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.530 [2024-07-22 19:29:36.269429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.530 [2024-07-22 19:29:36.269436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.530 [2024-07-22 19:29:36.269445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
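The two JSON-RPC failures above share one root cause, spelled out in the *ERROR* lines: "Incorrect permissions for PSK file". The temporary key /tmp/tmp.IMCPSUYoQS is not owner-only at this point, so bdev_nvme_load_psk rejects the bdev_nvme_attach_controller call (code -1, "Operation not permitted") and tcp_load_psk rejects nvmf_subsystem_add_host (code -32603, "Internal error"); tls.sh treats both as expected negative checks (the return 1 at target/tls.sh@37 and the NOT wrapper at @177) and then restores the key with chmod 0600 at @181 before setting the target up again. A minimal sketch of that target-side sequence, built only from commands that appear verbatim in this log (rpc.py below stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):

# PSK-based TLS target setup as driven by target/tls.sh@181 and @49-@58; the key file must be 0600,
# otherwise tcp_load_psk and bdev_nvme_load_psk fail exactly as shown above.
chmod 0600 /tmp/tmp.IMCPSUYoQS
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS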
00:25:17.530 [2024-07-22 19:29:36.269469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IMCPSUYoQS 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMCPSUYoQS 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:18.101 [2024-07-22 19:29:36.946007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.101 19:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:18.362 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:18.362 [2024-07-22 19:29:37.246765] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:18.362 [2024-07-22 19:29:37.246976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.362 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:18.622 malloc0 00:25:18.622 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS 00:25:18.883 [2024-07-22 19:29:37.729536] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2967572 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2967572 /var/tmp/bdevperf.sock 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 2967572 ']' 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.883 19:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.883 [2024-07-22 19:29:37.802491] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:18.883 [2024-07-22 19:29:37.802599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967572 ] 00:25:19.179 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.179 [2024-07-22 19:29:37.900614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.179 [2024-07-22 19:29:38.035967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.750 19:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.750 19:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:19.750 19:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS 00:25:20.010 [2024-07-22 19:29:38.707708] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:20.010 [2024-07-22 19:29:38.707811] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:20.010 TLSTESTn1 00:25:20.010 19:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:20.270 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:25:20.270 "subsystems": [ 00:25:20.270 { 00:25:20.270 "subsystem": "keyring", 00:25:20.270 "config": [] 00:25:20.270 }, 00:25:20.270 { 00:25:20.270 "subsystem": "iobuf", 00:25:20.270 "config": [ 00:25:20.270 { 00:25:20.270 "method": "iobuf_set_options", 00:25:20.270 "params": { 00:25:20.270 "small_pool_count": 8192, 00:25:20.270 "large_pool_count": 1024, 00:25:20.270 "small_bufsize": 8192, 00:25:20.270 "large_bufsize": 135168 00:25:20.270 } 00:25:20.270 } 00:25:20.270 ] 00:25:20.270 }, 00:25:20.270 { 00:25:20.270 "subsystem": "sock", 00:25:20.270 "config": [ 00:25:20.270 { 00:25:20.270 "method": "sock_set_default_impl", 00:25:20.270 "params": { 00:25:20.270 "impl_name": "posix" 00:25:20.270 } 00:25:20.270 }, 00:25:20.270 { 00:25:20.270 "method": "sock_impl_set_options", 00:25:20.270 "params": { 00:25:20.270 "impl_name": "ssl", 00:25:20.270 "recv_buf_size": 4096, 00:25:20.270 "send_buf_size": 4096, 
00:25:20.270 "enable_recv_pipe": true, 00:25:20.270 "enable_quickack": false, 00:25:20.270 "enable_placement_id": 0, 00:25:20.270 "enable_zerocopy_send_server": true, 00:25:20.270 "enable_zerocopy_send_client": false, 00:25:20.270 "zerocopy_threshold": 0, 00:25:20.270 "tls_version": 0, 00:25:20.270 "enable_ktls": false 00:25:20.270 } 00:25:20.270 }, 00:25:20.270 { 00:25:20.270 "method": "sock_impl_set_options", 00:25:20.270 "params": { 00:25:20.270 "impl_name": "posix", 00:25:20.270 "recv_buf_size": 2097152, 00:25:20.271 "send_buf_size": 2097152, 00:25:20.271 "enable_recv_pipe": true, 00:25:20.271 "enable_quickack": false, 00:25:20.271 "enable_placement_id": 0, 00:25:20.271 "enable_zerocopy_send_server": true, 00:25:20.271 "enable_zerocopy_send_client": false, 00:25:20.271 "zerocopy_threshold": 0, 00:25:20.271 "tls_version": 0, 00:25:20.271 "enable_ktls": false 00:25:20.271 } 00:25:20.271 } 00:25:20.271 ] 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "subsystem": "vmd", 00:25:20.271 "config": [] 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "subsystem": "accel", 00:25:20.271 "config": [ 00:25:20.271 { 00:25:20.271 "method": "accel_set_options", 00:25:20.271 "params": { 00:25:20.271 "small_cache_size": 128, 00:25:20.271 "large_cache_size": 16, 00:25:20.271 "task_count": 2048, 00:25:20.271 "sequence_count": 2048, 00:25:20.271 "buf_count": 2048 00:25:20.271 } 00:25:20.271 } 00:25:20.271 ] 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "subsystem": "bdev", 00:25:20.271 "config": [ 00:25:20.271 { 00:25:20.271 "method": "bdev_set_options", 00:25:20.271 "params": { 00:25:20.271 "bdev_io_pool_size": 65535, 00:25:20.271 "bdev_io_cache_size": 256, 00:25:20.271 "bdev_auto_examine": true, 00:25:20.271 "iobuf_small_cache_size": 128, 00:25:20.271 "iobuf_large_cache_size": 16 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "bdev_raid_set_options", 00:25:20.271 "params": { 00:25:20.271 "process_window_size_kb": 1024, 00:25:20.271 "process_max_bandwidth_mb_sec": 0 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "bdev_iscsi_set_options", 00:25:20.271 "params": { 00:25:20.271 "timeout_sec": 30 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "bdev_nvme_set_options", 00:25:20.271 "params": { 00:25:20.271 "action_on_timeout": "none", 00:25:20.271 "timeout_us": 0, 00:25:20.271 "timeout_admin_us": 0, 00:25:20.271 "keep_alive_timeout_ms": 10000, 00:25:20.271 "arbitration_burst": 0, 00:25:20.271 "low_priority_weight": 0, 00:25:20.271 "medium_priority_weight": 0, 00:25:20.271 "high_priority_weight": 0, 00:25:20.271 "nvme_adminq_poll_period_us": 10000, 00:25:20.271 "nvme_ioq_poll_period_us": 0, 00:25:20.271 "io_queue_requests": 0, 00:25:20.271 "delay_cmd_submit": true, 00:25:20.271 "transport_retry_count": 4, 00:25:20.271 "bdev_retry_count": 3, 00:25:20.271 "transport_ack_timeout": 0, 00:25:20.271 "ctrlr_loss_timeout_sec": 0, 00:25:20.271 "reconnect_delay_sec": 0, 00:25:20.271 "fast_io_fail_timeout_sec": 0, 00:25:20.271 "disable_auto_failback": false, 00:25:20.271 "generate_uuids": false, 00:25:20.271 "transport_tos": 0, 00:25:20.271 "nvme_error_stat": false, 00:25:20.271 "rdma_srq_size": 0, 00:25:20.271 "io_path_stat": false, 00:25:20.271 "allow_accel_sequence": false, 00:25:20.271 "rdma_max_cq_size": 0, 00:25:20.271 "rdma_cm_event_timeout_ms": 0, 00:25:20.271 "dhchap_digests": [ 00:25:20.271 "sha256", 00:25:20.271 "sha384", 00:25:20.271 "sha512" 00:25:20.271 ], 00:25:20.271 "dhchap_dhgroups": [ 00:25:20.271 "null", 00:25:20.271 "ffdhe2048", 00:25:20.271 
"ffdhe3072", 00:25:20.271 "ffdhe4096", 00:25:20.271 "ffdhe6144", 00:25:20.271 "ffdhe8192" 00:25:20.271 ] 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "bdev_nvme_set_hotplug", 00:25:20.271 "params": { 00:25:20.271 "period_us": 100000, 00:25:20.271 "enable": false 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "bdev_malloc_create", 00:25:20.271 "params": { 00:25:20.271 "name": "malloc0", 00:25:20.271 "num_blocks": 8192, 00:25:20.271 "block_size": 4096, 00:25:20.271 "physical_block_size": 4096, 00:25:20.271 "uuid": "63479e1e-bdce-4c25-b11f-33e706029b5d", 00:25:20.271 "optimal_io_boundary": 0, 00:25:20.271 "md_size": 0, 00:25:20.271 "dif_type": 0, 00:25:20.271 "dif_is_head_of_md": false, 00:25:20.271 "dif_pi_format": 0 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "bdev_wait_for_examine" 00:25:20.271 } 00:25:20.271 ] 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "subsystem": "nbd", 00:25:20.271 "config": [] 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "subsystem": "scheduler", 00:25:20.271 "config": [ 00:25:20.271 { 00:25:20.271 "method": "framework_set_scheduler", 00:25:20.271 "params": { 00:25:20.271 "name": "static" 00:25:20.271 } 00:25:20.271 } 00:25:20.271 ] 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "subsystem": "nvmf", 00:25:20.271 "config": [ 00:25:20.271 { 00:25:20.271 "method": "nvmf_set_config", 00:25:20.271 "params": { 00:25:20.271 "discovery_filter": "match_any", 00:25:20.271 "admin_cmd_passthru": { 00:25:20.271 "identify_ctrlr": false 00:25:20.271 } 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "nvmf_set_max_subsystems", 00:25:20.271 "params": { 00:25:20.271 "max_subsystems": 1024 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "nvmf_set_crdt", 00:25:20.271 "params": { 00:25:20.271 "crdt1": 0, 00:25:20.271 "crdt2": 0, 00:25:20.271 "crdt3": 0 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "nvmf_create_transport", 00:25:20.271 "params": { 00:25:20.271 "trtype": "TCP", 00:25:20.271 "max_queue_depth": 128, 00:25:20.271 "max_io_qpairs_per_ctrlr": 127, 00:25:20.271 "in_capsule_data_size": 4096, 00:25:20.271 "max_io_size": 131072, 00:25:20.271 "io_unit_size": 131072, 00:25:20.271 "max_aq_depth": 128, 00:25:20.271 "num_shared_buffers": 511, 00:25:20.271 "buf_cache_size": 4294967295, 00:25:20.271 "dif_insert_or_strip": false, 00:25:20.271 "zcopy": false, 00:25:20.271 "c2h_success": false, 00:25:20.271 "sock_priority": 0, 00:25:20.271 "abort_timeout_sec": 1, 00:25:20.271 "ack_timeout": 0, 00:25:20.271 "data_wr_pool_size": 0 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "nvmf_create_subsystem", 00:25:20.271 "params": { 00:25:20.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.271 "allow_any_host": false, 00:25:20.271 "serial_number": "SPDK00000000000001", 00:25:20.271 "model_number": "SPDK bdev Controller", 00:25:20.271 "max_namespaces": 10, 00:25:20.271 "min_cntlid": 1, 00:25:20.271 "max_cntlid": 65519, 00:25:20.271 "ana_reporting": false 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "nvmf_subsystem_add_host", 00:25:20.271 "params": { 00:25:20.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.271 "host": "nqn.2016-06.io.spdk:host1", 00:25:20.271 "psk": "/tmp/tmp.IMCPSUYoQS" 00:25:20.271 } 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "method": "nvmf_subsystem_add_ns", 00:25:20.271 "params": { 00:25:20.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.271 "namespace": { 00:25:20.271 "nsid": 1, 00:25:20.272 
"bdev_name": "malloc0", 00:25:20.272 "nguid": "63479E1EBDCE4C25B11F33E706029B5D", 00:25:20.272 "uuid": "63479e1e-bdce-4c25-b11f-33e706029b5d", 00:25:20.272 "no_auto_visible": false 00:25:20.272 } 00:25:20.272 } 00:25:20.272 }, 00:25:20.272 { 00:25:20.272 "method": "nvmf_subsystem_add_listener", 00:25:20.272 "params": { 00:25:20.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.272 "listen_address": { 00:25:20.272 "trtype": "TCP", 00:25:20.272 "adrfam": "IPv4", 00:25:20.272 "traddr": "10.0.0.2", 00:25:20.272 "trsvcid": "4420" 00:25:20.272 }, 00:25:20.272 "secure_channel": true 00:25:20.272 } 00:25:20.272 } 00:25:20.272 ] 00:25:20.272 } 00:25:20.272 ] 00:25:20.272 }' 00:25:20.272 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:20.532 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:25:20.532 "subsystems": [ 00:25:20.532 { 00:25:20.532 "subsystem": "keyring", 00:25:20.532 "config": [] 00:25:20.532 }, 00:25:20.532 { 00:25:20.532 "subsystem": "iobuf", 00:25:20.532 "config": [ 00:25:20.532 { 00:25:20.532 "method": "iobuf_set_options", 00:25:20.532 "params": { 00:25:20.532 "small_pool_count": 8192, 00:25:20.532 "large_pool_count": 1024, 00:25:20.532 "small_bufsize": 8192, 00:25:20.532 "large_bufsize": 135168 00:25:20.532 } 00:25:20.532 } 00:25:20.532 ] 00:25:20.532 }, 00:25:20.532 { 00:25:20.532 "subsystem": "sock", 00:25:20.532 "config": [ 00:25:20.532 { 00:25:20.532 "method": "sock_set_default_impl", 00:25:20.532 "params": { 00:25:20.532 "impl_name": "posix" 00:25:20.532 } 00:25:20.532 }, 00:25:20.532 { 00:25:20.532 "method": "sock_impl_set_options", 00:25:20.532 "params": { 00:25:20.532 "impl_name": "ssl", 00:25:20.532 "recv_buf_size": 4096, 00:25:20.532 "send_buf_size": 4096, 00:25:20.532 "enable_recv_pipe": true, 00:25:20.532 "enable_quickack": false, 00:25:20.532 "enable_placement_id": 0, 00:25:20.532 "enable_zerocopy_send_server": true, 00:25:20.532 "enable_zerocopy_send_client": false, 00:25:20.533 "zerocopy_threshold": 0, 00:25:20.533 "tls_version": 0, 00:25:20.533 "enable_ktls": false 00:25:20.533 } 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "method": "sock_impl_set_options", 00:25:20.533 "params": { 00:25:20.533 "impl_name": "posix", 00:25:20.533 "recv_buf_size": 2097152, 00:25:20.533 "send_buf_size": 2097152, 00:25:20.533 "enable_recv_pipe": true, 00:25:20.533 "enable_quickack": false, 00:25:20.533 "enable_placement_id": 0, 00:25:20.533 "enable_zerocopy_send_server": true, 00:25:20.533 "enable_zerocopy_send_client": false, 00:25:20.533 "zerocopy_threshold": 0, 00:25:20.533 "tls_version": 0, 00:25:20.533 "enable_ktls": false 00:25:20.533 } 00:25:20.533 } 00:25:20.533 ] 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "subsystem": "vmd", 00:25:20.533 "config": [] 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "subsystem": "accel", 00:25:20.533 "config": [ 00:25:20.533 { 00:25:20.533 "method": "accel_set_options", 00:25:20.533 "params": { 00:25:20.533 "small_cache_size": 128, 00:25:20.533 "large_cache_size": 16, 00:25:20.533 "task_count": 2048, 00:25:20.533 "sequence_count": 2048, 00:25:20.533 "buf_count": 2048 00:25:20.533 } 00:25:20.533 } 00:25:20.533 ] 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "subsystem": "bdev", 00:25:20.533 "config": [ 00:25:20.533 { 00:25:20.533 "method": "bdev_set_options", 00:25:20.533 "params": { 00:25:20.533 "bdev_io_pool_size": 65535, 00:25:20.533 "bdev_io_cache_size": 256, 00:25:20.533 
"bdev_auto_examine": true, 00:25:20.533 "iobuf_small_cache_size": 128, 00:25:20.533 "iobuf_large_cache_size": 16 00:25:20.533 } 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "method": "bdev_raid_set_options", 00:25:20.533 "params": { 00:25:20.533 "process_window_size_kb": 1024, 00:25:20.533 "process_max_bandwidth_mb_sec": 0 00:25:20.533 } 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "method": "bdev_iscsi_set_options", 00:25:20.533 "params": { 00:25:20.533 "timeout_sec": 30 00:25:20.533 } 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "method": "bdev_nvme_set_options", 00:25:20.533 "params": { 00:25:20.533 "action_on_timeout": "none", 00:25:20.533 "timeout_us": 0, 00:25:20.533 "timeout_admin_us": 0, 00:25:20.533 "keep_alive_timeout_ms": 10000, 00:25:20.533 "arbitration_burst": 0, 00:25:20.533 "low_priority_weight": 0, 00:25:20.533 "medium_priority_weight": 0, 00:25:20.533 "high_priority_weight": 0, 00:25:20.533 "nvme_adminq_poll_period_us": 10000, 00:25:20.533 "nvme_ioq_poll_period_us": 0, 00:25:20.533 "io_queue_requests": 512, 00:25:20.533 "delay_cmd_submit": true, 00:25:20.533 "transport_retry_count": 4, 00:25:20.533 "bdev_retry_count": 3, 00:25:20.533 "transport_ack_timeout": 0, 00:25:20.533 "ctrlr_loss_timeout_sec": 0, 00:25:20.533 "reconnect_delay_sec": 0, 00:25:20.533 "fast_io_fail_timeout_sec": 0, 00:25:20.533 "disable_auto_failback": false, 00:25:20.533 "generate_uuids": false, 00:25:20.533 "transport_tos": 0, 00:25:20.533 "nvme_error_stat": false, 00:25:20.533 "rdma_srq_size": 0, 00:25:20.533 "io_path_stat": false, 00:25:20.533 "allow_accel_sequence": false, 00:25:20.533 "rdma_max_cq_size": 0, 00:25:20.533 "rdma_cm_event_timeout_ms": 0, 00:25:20.533 "dhchap_digests": [ 00:25:20.533 "sha256", 00:25:20.533 "sha384", 00:25:20.533 "sha512" 00:25:20.533 ], 00:25:20.533 "dhchap_dhgroups": [ 00:25:20.533 "null", 00:25:20.533 "ffdhe2048", 00:25:20.533 "ffdhe3072", 00:25:20.533 "ffdhe4096", 00:25:20.533 "ffdhe6144", 00:25:20.533 "ffdhe8192" 00:25:20.533 ] 00:25:20.533 } 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "method": "bdev_nvme_attach_controller", 00:25:20.533 "params": { 00:25:20.533 "name": "TLSTEST", 00:25:20.533 "trtype": "TCP", 00:25:20.533 "adrfam": "IPv4", 00:25:20.533 "traddr": "10.0.0.2", 00:25:20.533 "trsvcid": "4420", 00:25:20.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.533 "prchk_reftag": false, 00:25:20.533 "prchk_guard": false, 00:25:20.533 "ctrlr_loss_timeout_sec": 0, 00:25:20.533 "reconnect_delay_sec": 0, 00:25:20.533 "fast_io_fail_timeout_sec": 0, 00:25:20.533 "psk": "/tmp/tmp.IMCPSUYoQS", 00:25:20.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:20.533 "hdgst": false, 00:25:20.533 "ddgst": false 00:25:20.533 } 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "method": "bdev_nvme_set_hotplug", 00:25:20.533 "params": { 00:25:20.533 "period_us": 100000, 00:25:20.533 "enable": false 00:25:20.533 } 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "method": "bdev_wait_for_examine" 00:25:20.533 } 00:25:20.533 ] 00:25:20.533 }, 00:25:20.533 { 00:25:20.533 "subsystem": "nbd", 00:25:20.533 "config": [] 00:25:20.533 } 00:25:20.533 ] 00:25:20.533 }' 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2967572 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2967572 ']' 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2967572 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2967572 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2967572' 00:25:20.533 killing process with pid 2967572 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2967572 00:25:20.533 Received shutdown signal, test time was about 10.000000 seconds 00:25:20.533 00:25:20.533 Latency(us) 00:25:20.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.533 =================================================================================================================== 00:25:20.533 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:20.533 [2024-07-22 19:29:39.357918] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:20.533 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2967572 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2967209 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2967209 ']' 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2967209 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2967209 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2967209' 00:25:21.104 killing process with pid 2967209 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2967209 00:25:21.104 [2024-07-22 19:29:39.919839] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:21.104 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2967209 00:25:21.676 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:21.676 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.676 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.676 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.676 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:25:21.676 "subsystems": [ 00:25:21.676 { 
00:25:21.676 "subsystem": "keyring", 00:25:21.676 "config": [] 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "subsystem": "iobuf", 00:25:21.676 "config": [ 00:25:21.676 { 00:25:21.676 "method": "iobuf_set_options", 00:25:21.676 "params": { 00:25:21.676 "small_pool_count": 8192, 00:25:21.676 "large_pool_count": 1024, 00:25:21.676 "small_bufsize": 8192, 00:25:21.676 "large_bufsize": 135168 00:25:21.676 } 00:25:21.676 } 00:25:21.676 ] 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "subsystem": "sock", 00:25:21.676 "config": [ 00:25:21.676 { 00:25:21.676 "method": "sock_set_default_impl", 00:25:21.676 "params": { 00:25:21.676 "impl_name": "posix" 00:25:21.676 } 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "method": "sock_impl_set_options", 00:25:21.676 "params": { 00:25:21.676 "impl_name": "ssl", 00:25:21.676 "recv_buf_size": 4096, 00:25:21.676 "send_buf_size": 4096, 00:25:21.676 "enable_recv_pipe": true, 00:25:21.676 "enable_quickack": false, 00:25:21.676 "enable_placement_id": 0, 00:25:21.676 "enable_zerocopy_send_server": true, 00:25:21.676 "enable_zerocopy_send_client": false, 00:25:21.676 "zerocopy_threshold": 0, 00:25:21.676 "tls_version": 0, 00:25:21.676 "enable_ktls": false 00:25:21.676 } 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "method": "sock_impl_set_options", 00:25:21.676 "params": { 00:25:21.676 "impl_name": "posix", 00:25:21.676 "recv_buf_size": 2097152, 00:25:21.676 "send_buf_size": 2097152, 00:25:21.676 "enable_recv_pipe": true, 00:25:21.676 "enable_quickack": false, 00:25:21.676 "enable_placement_id": 0, 00:25:21.676 "enable_zerocopy_send_server": true, 00:25:21.676 "enable_zerocopy_send_client": false, 00:25:21.676 "zerocopy_threshold": 0, 00:25:21.676 "tls_version": 0, 00:25:21.676 "enable_ktls": false 00:25:21.676 } 00:25:21.676 } 00:25:21.676 ] 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "subsystem": "vmd", 00:25:21.676 "config": [] 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "subsystem": "accel", 00:25:21.676 "config": [ 00:25:21.676 { 00:25:21.676 "method": "accel_set_options", 00:25:21.676 "params": { 00:25:21.676 "small_cache_size": 128, 00:25:21.676 "large_cache_size": 16, 00:25:21.676 "task_count": 2048, 00:25:21.676 "sequence_count": 2048, 00:25:21.676 "buf_count": 2048 00:25:21.676 } 00:25:21.676 } 00:25:21.676 ] 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "subsystem": "bdev", 00:25:21.676 "config": [ 00:25:21.676 { 00:25:21.676 "method": "bdev_set_options", 00:25:21.676 "params": { 00:25:21.676 "bdev_io_pool_size": 65535, 00:25:21.676 "bdev_io_cache_size": 256, 00:25:21.676 "bdev_auto_examine": true, 00:25:21.676 "iobuf_small_cache_size": 128, 00:25:21.676 "iobuf_large_cache_size": 16 00:25:21.676 } 00:25:21.676 }, 00:25:21.676 { 00:25:21.676 "method": "bdev_raid_set_options", 00:25:21.676 "params": { 00:25:21.676 "process_window_size_kb": 1024, 00:25:21.676 "process_max_bandwidth_mb_sec": 0 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "bdev_iscsi_set_options", 00:25:21.677 "params": { 00:25:21.677 "timeout_sec": 30 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "bdev_nvme_set_options", 00:25:21.677 "params": { 00:25:21.677 "action_on_timeout": "none", 00:25:21.677 "timeout_us": 0, 00:25:21.677 "timeout_admin_us": 0, 00:25:21.677 "keep_alive_timeout_ms": 10000, 00:25:21.677 "arbitration_burst": 0, 00:25:21.677 "low_priority_weight": 0, 00:25:21.677 "medium_priority_weight": 0, 00:25:21.677 "high_priority_weight": 0, 00:25:21.677 "nvme_adminq_poll_period_us": 10000, 00:25:21.677 "nvme_ioq_poll_period_us": 0, 00:25:21.677 
"io_queue_requests": 0, 00:25:21.677 "delay_cmd_submit": true, 00:25:21.677 "transport_retry_count": 4, 00:25:21.677 "bdev_retry_count": 3, 00:25:21.677 "transport_ack_timeout": 0, 00:25:21.677 "ctrlr_loss_timeout_sec": 0, 00:25:21.677 "reconnect_delay_sec": 0, 00:25:21.677 "fast_io_fail_timeout_sec": 0, 00:25:21.677 "disable_auto_failback": false, 00:25:21.677 "generate_uuids": false, 00:25:21.677 "transport_tos": 0, 00:25:21.677 "nvme_error_stat": false, 00:25:21.677 "rdma_srq_size": 0, 00:25:21.677 "io_path_stat": false, 00:25:21.677 "allow_accel_sequence": false, 00:25:21.677 "rdma_max_cq_size": 0, 00:25:21.677 "rdma_cm_event_timeout_ms": 0, 00:25:21.677 "dhchap_digests": [ 00:25:21.677 "sha256", 00:25:21.677 "sha384", 00:25:21.677 "sha512" 00:25:21.677 ], 00:25:21.677 "dhchap_dhgroups": [ 00:25:21.677 "null", 00:25:21.677 "ffdhe2048", 00:25:21.677 "ffdhe3072", 00:25:21.677 "ffdhe4096", 00:25:21.677 "ffdhe6144", 00:25:21.677 "ffdhe8192" 00:25:21.677 ] 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "bdev_nvme_set_hotplug", 00:25:21.677 "params": { 00:25:21.677 "period_us": 100000, 00:25:21.677 "enable": false 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "bdev_malloc_create", 00:25:21.677 "params": { 00:25:21.677 "name": "malloc0", 00:25:21.677 "num_blocks": 8192, 00:25:21.677 "block_size": 4096, 00:25:21.677 "physical_block_size": 4096, 00:25:21.677 "uuid": "63479e1e-bdce-4c25-b11f-33e706029b5d", 00:25:21.677 "optimal_io_boundary": 0, 00:25:21.677 "md_size": 0, 00:25:21.677 "dif_type": 0, 00:25:21.677 "dif_is_head_of_md": false, 00:25:21.677 "dif_pi_format": 0 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "bdev_wait_for_examine" 00:25:21.677 } 00:25:21.677 ] 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "subsystem": "nbd", 00:25:21.677 "config": [] 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "subsystem": "scheduler", 00:25:21.677 "config": [ 00:25:21.677 { 00:25:21.677 "method": "framework_set_scheduler", 00:25:21.677 "params": { 00:25:21.677 "name": "static" 00:25:21.677 } 00:25:21.677 } 00:25:21.677 ] 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "subsystem": "nvmf", 00:25:21.677 "config": [ 00:25:21.677 { 00:25:21.677 "method": "nvmf_set_config", 00:25:21.677 "params": { 00:25:21.677 "discovery_filter": "match_any", 00:25:21.677 "admin_cmd_passthru": { 00:25:21.677 "identify_ctrlr": false 00:25:21.677 } 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "nvmf_set_max_subsystems", 00:25:21.677 "params": { 00:25:21.677 "max_subsystems": 1024 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "nvmf_set_crdt", 00:25:21.677 "params": { 00:25:21.677 "crdt1": 0, 00:25:21.677 "crdt2": 0, 00:25:21.677 "crdt3": 0 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "nvmf_create_transport", 00:25:21.677 "params": { 00:25:21.677 "trtype": "TCP", 00:25:21.677 "max_queue_depth": 128, 00:25:21.677 "max_io_qpairs_per_ctrlr": 127, 00:25:21.677 "in_capsule_data_size": 4096, 00:25:21.677 "max_io_size": 131072, 00:25:21.677 "io_unit_size": 131072, 00:25:21.677 "max_aq_depth": 128, 00:25:21.677 "num_shared_buffers": 511, 00:25:21.677 "buf_cache_size": 4294967295, 00:25:21.677 "dif_insert_or_strip": false, 00:25:21.677 "zcopy": false, 00:25:21.677 "c2h_success": false, 00:25:21.677 "sock_priority": 0, 00:25:21.677 "abort_timeout_sec": 1, 00:25:21.677 "ack_timeout": 0, 00:25:21.677 "data_wr_pool_size": 0 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": 
"nvmf_create_subsystem", 00:25:21.677 "params": { 00:25:21.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.677 "allow_any_host": false, 00:25:21.677 "serial_number": "SPDK00000000000001", 00:25:21.677 "model_number": "SPDK bdev Controller", 00:25:21.677 "max_namespaces": 10, 00:25:21.677 "min_cntlid": 1, 00:25:21.677 "max_cntlid": 65519, 00:25:21.677 "ana_reporting": false 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "nvmf_subsystem_add_host", 00:25:21.677 "params": { 00:25:21.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.677 "host": "nqn.2016-06.io.spdk:host1", 00:25:21.677 "psk": "/tmp/tmp.IMCPSUYoQS" 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "nvmf_subsystem_add_ns", 00:25:21.677 "params": { 00:25:21.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.677 "namespace": { 00:25:21.677 "nsid": 1, 00:25:21.677 "bdev_name": "malloc0", 00:25:21.677 "nguid": "63479E1EBDCE4C25B11F33E706029B5D", 00:25:21.677 "uuid": "63479e1e-bdce-4c25-b11f-33e706029b5d", 00:25:21.677 "no_auto_visible": false 00:25:21.677 } 00:25:21.677 } 00:25:21.677 }, 00:25:21.677 { 00:25:21.677 "method": "nvmf_subsystem_add_listener", 00:25:21.677 "params": { 00:25:21.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.677 "listen_address": { 00:25:21.677 "trtype": "TCP", 00:25:21.677 "adrfam": "IPv4", 00:25:21.677 "traddr": "10.0.0.2", 00:25:21.677 "trsvcid": "4420" 00:25:21.677 }, 00:25:21.677 "secure_channel": true 00:25:21.677 } 00:25:21.677 } 00:25:21.677 ] 00:25:21.677 } 00:25:21.677 ] 00:25:21.677 }' 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2968255 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2968255 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2968255 ']' 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.677 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.938 [2024-07-22 19:29:40.693676] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:21.938 [2024-07-22 19:29:40.693787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.938 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.938 [2024-07-22 19:29:40.831297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.199 [2024-07-22 19:29:40.972856] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:22.199 [2024-07-22 19:29:40.972894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.199 [2024-07-22 19:29:40.972904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.199 [2024-07-22 19:29:40.972910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.199 [2024-07-22 19:29:40.972919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.199 [2024-07-22 19:29:40.972986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.460 [2024-07-22 19:29:41.315950] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.460 [2024-07-22 19:29:41.331917] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:22.460 [2024-07-22 19:29:41.347974] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:22.460 [2024-07-22 19:29:41.348174] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2968288 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2968288 /var/tmp/bdevperf.sock 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2968288 ']' 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:22.721 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:25:22.721 "subsystems": [ 00:25:22.721 { 00:25:22.721 "subsystem": "keyring", 00:25:22.721 "config": [] 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "subsystem": "iobuf", 00:25:22.721 "config": [ 00:25:22.721 { 00:25:22.721 "method": "iobuf_set_options", 00:25:22.721 "params": { 00:25:22.721 "small_pool_count": 8192, 00:25:22.721 "large_pool_count": 1024, 00:25:22.721 "small_bufsize": 8192, 00:25:22.721 "large_bufsize": 135168 00:25:22.721 } 00:25:22.721 } 00:25:22.721 ] 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "subsystem": "sock", 00:25:22.721 "config": [ 00:25:22.721 { 00:25:22.721 "method": "sock_set_default_impl", 00:25:22.721 "params": { 00:25:22.721 "impl_name": "posix" 00:25:22.721 } 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "method": "sock_impl_set_options", 00:25:22.721 "params": { 00:25:22.721 "impl_name": "ssl", 00:25:22.721 "recv_buf_size": 4096, 00:25:22.721 "send_buf_size": 4096, 00:25:22.721 "enable_recv_pipe": true, 00:25:22.721 "enable_quickack": false, 00:25:22.721 "enable_placement_id": 0, 00:25:22.721 "enable_zerocopy_send_server": true, 00:25:22.721 "enable_zerocopy_send_client": false, 00:25:22.721 "zerocopy_threshold": 0, 00:25:22.721 "tls_version": 0, 00:25:22.721 "enable_ktls": false 00:25:22.721 } 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "method": "sock_impl_set_options", 00:25:22.721 "params": { 00:25:22.721 "impl_name": "posix", 00:25:22.721 "recv_buf_size": 2097152, 00:25:22.721 "send_buf_size": 2097152, 00:25:22.721 "enable_recv_pipe": true, 00:25:22.721 "enable_quickack": false, 00:25:22.721 "enable_placement_id": 0, 00:25:22.721 "enable_zerocopy_send_server": true, 00:25:22.721 "enable_zerocopy_send_client": false, 00:25:22.721 "zerocopy_threshold": 0, 00:25:22.721 "tls_version": 0, 00:25:22.721 "enable_ktls": false 00:25:22.721 } 00:25:22.721 } 00:25:22.721 ] 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "subsystem": "vmd", 00:25:22.721 "config": [] 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "subsystem": "accel", 00:25:22.721 "config": [ 00:25:22.721 { 00:25:22.721 "method": "accel_set_options", 00:25:22.721 "params": { 00:25:22.721 "small_cache_size": 128, 00:25:22.721 "large_cache_size": 16, 00:25:22.721 "task_count": 2048, 00:25:22.721 "sequence_count": 2048, 00:25:22.721 "buf_count": 2048 00:25:22.721 } 00:25:22.721 } 00:25:22.721 ] 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "subsystem": "bdev", 00:25:22.721 "config": [ 00:25:22.721 { 00:25:22.721 "method": "bdev_set_options", 00:25:22.721 "params": { 00:25:22.721 "bdev_io_pool_size": 65535, 00:25:22.721 "bdev_io_cache_size": 256, 00:25:22.721 "bdev_auto_examine": true, 00:25:22.721 "iobuf_small_cache_size": 128, 00:25:22.721 "iobuf_large_cache_size": 16 00:25:22.721 } 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "method": "bdev_raid_set_options", 00:25:22.721 "params": { 00:25:22.721 "process_window_size_kb": 1024, 00:25:22.721 "process_max_bandwidth_mb_sec": 0 00:25:22.721 } 00:25:22.721 }, 00:25:22.721 { 00:25:22.721 "method": "bdev_iscsi_set_options", 
00:25:22.722 "params": { 00:25:22.722 "timeout_sec": 30 00:25:22.722 } 00:25:22.722 }, 00:25:22.722 { 00:25:22.722 "method": "bdev_nvme_set_options", 00:25:22.722 "params": { 00:25:22.722 "action_on_timeout": "none", 00:25:22.722 "timeout_us": 0, 00:25:22.722 "timeout_admin_us": 0, 00:25:22.722 "keep_alive_timeout_ms": 10000, 00:25:22.722 "arbitration_burst": 0, 00:25:22.722 "low_priority_weight": 0, 00:25:22.722 "medium_priority_weight": 0, 00:25:22.722 "high_priority_weight": 0, 00:25:22.722 "nvme_adminq_poll_period_us": 10000, 00:25:22.722 "nvme_ioq_poll_period_us": 0, 00:25:22.722 "io_queue_requests": 512, 00:25:22.722 "delay_cmd_submit": true, 00:25:22.722 "transport_retry_count": 4, 00:25:22.722 "bdev_retry_count": 3, 00:25:22.722 "transport_ack_timeout": 0, 00:25:22.722 "ctrlr_loss_timeout_sec": 0, 00:25:22.722 "reconnect_delay_sec": 0, 00:25:22.722 "fast_io_fail_timeout_sec": 0, 00:25:22.722 "disable_auto_failback": false, 00:25:22.722 "generate_uuids": false, 00:25:22.722 "transport_tos": 0, 00:25:22.722 "nvme_error_stat": false, 00:25:22.722 "rdma_srq_size": 0, 00:25:22.722 "io_path_stat": false, 00:25:22.722 "allow_accel_sequence": false, 00:25:22.722 "rdma_max_cq_size": 0, 00:25:22.722 "rdma_cm_event_timeout_ms": 0, 00:25:22.722 "dhchap_digests": [ 00:25:22.722 "sha256", 00:25:22.722 "sha384", 00:25:22.722 "sha512" 00:25:22.722 ], 00:25:22.722 "dhchap_dhgroups": [ 00:25:22.722 "null", 00:25:22.722 "ffdhe2048", 00:25:22.722 "ffdhe3072", 00:25:22.722 "ffdhe4096", 00:25:22.722 "ffdhe6144", 00:25:22.722 "ffdhe8192" 00:25:22.722 ] 00:25:22.722 } 00:25:22.722 }, 00:25:22.722 { 00:25:22.722 "method": "bdev_nvme_attach_controller", 00:25:22.722 "params": { 00:25:22.722 "name": "TLSTEST", 00:25:22.722 "trtype": "TCP", 00:25:22.722 "adrfam": "IPv4", 00:25:22.722 "traddr": "10.0.0.2", 00:25:22.722 "trsvcid": "4420", 00:25:22.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.722 "prchk_reftag": false, 00:25:22.722 "prchk_guard": false, 00:25:22.722 "ctrlr_loss_timeout_sec": 0, 00:25:22.722 "reconnect_delay_sec": 0, 00:25:22.722 "fast_io_fail_timeout_sec": 0, 00:25:22.722 "psk": "/tmp/tmp.IMCPSUYoQS", 00:25:22.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:22.722 "hdgst": false, 00:25:22.722 "ddgst": false 00:25:22.722 } 00:25:22.722 }, 00:25:22.722 { 00:25:22.722 "method": "bdev_nvme_set_hotplug", 00:25:22.722 "params": { 00:25:22.722 "period_us": 100000, 00:25:22.722 "enable": false 00:25:22.722 } 00:25:22.722 }, 00:25:22.722 { 00:25:22.722 "method": "bdev_wait_for_examine" 00:25:22.722 } 00:25:22.722 ] 00:25:22.722 }, 00:25:22.722 { 00:25:22.722 "subsystem": "nbd", 00:25:22.722 "config": [] 00:25:22.722 } 00:25:22.722 ] 00:25:22.722 }' 00:25:22.722 [2024-07-22 19:29:41.531994] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:22.722 [2024-07-22 19:29:41.532102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2968288 ] 00:25:22.722 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.722 [2024-07-22 19:29:41.631952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.983 [2024-07-22 19:29:41.765730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.244 [2024-07-22 19:29:42.005535] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.244 [2024-07-22 19:29:42.005641] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:23.504 19:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.504 19:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:23.504 19:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:23.505 Running I/O for 10 seconds... 00:25:33.508 00:25:33.509 Latency(us) 00:25:33.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.509 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:33.509 Verification LBA range: start 0x0 length 0x2000 00:25:33.509 TLSTESTn1 : 10.03 3440.68 13.44 0.00 0.00 37133.61 6990.51 92624.21 00:25:33.509 =================================================================================================================== 00:25:33.509 Total : 3440.68 13.44 0.00 0.00 37133.61 6990.51 92624.21 00:25:33.509 0 00:25:33.509 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:33.509 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2968288 00:25:33.509 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2968288 ']' 00:25:33.509 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2968288 00:25:33.509 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:33.509 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.509 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2968288 00:25:33.774 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:33.774 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:33.774 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2968288' 00:25:33.774 killing process with pid 2968288 00:25:33.774 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2968288 00:25:33.774 Received shutdown signal, test time was about 10.000000 seconds 00:25:33.774 00:25:33.774 Latency(us) 00:25:33.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.774 
=================================================================================================================== 00:25:33.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.774 [2024-07-22 19:29:52.492407] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:33.774 19:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2968288 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2968255 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2968255 ']' 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2968255 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2968255 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2968255' 00:25:34.346 killing process with pid 2968255 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2968255 00:25:34.346 [2024-07-22 19:29:53.068665] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:34.346 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2968255 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2970636 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2970636 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2970636 ']' 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
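Note: the TLS run that just finished above is driven entirely from bdevperf. The binary is started idle with -z and its whole configuration is passed as JSON on -c; the /dev/fd/63 in the command line is simply bash process substitution around the echo '{...}' that follows it, and the actual 10-second verify workload is then triggered over the RPC socket with bdevperf.py perform_tests. A minimal sketch of that flow, with the core mask, socket path and I/O parameters copied from this run; $BDEVPERF_JSON is a placeholder for the config echoed above, and paths are shortened to be relative to the SPDK tree:

  # Start bdevperf idle (-z): it loads the JSON config (which carries the
  # bdev_nvme_attach_controller call with the PSK) and then waits for RPCs.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$BDEVPERF_JSON")

  # From the test script: kick off the verify workload over the RPC socket.
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The "psk": "/tmp/tmp.IMCPSUYoQS" parameter inside that JSON is the older per-controller PSK path, which is why the log prints the spdk_nvme_ctrlr_opts.psk deprecation warning; later runs below switch to a named keyring key instead.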
00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.917 19:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:34.917 [2024-07-22 19:29:53.846417] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:34.917 [2024-07-22 19:29:53.846518] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.178 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.178 [2024-07-22 19:29:53.970571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.439 [2024-07-22 19:29:54.149290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.439 [2024-07-22 19:29:54.149338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.439 [2024-07-22 19:29:54.149352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.439 [2024-07-22 19:29:54.149361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.439 [2024-07-22 19:29:54.149374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.440 [2024-07-22 19:29:54.149403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IMCPSUYoQS 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMCPSUYoQS 00:25:35.700 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:35.961 [2024-07-22 19:29:54.766980] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.961 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:36.222 19:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:36.222 [2024-07-22 19:29:55.075793] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:36.222 [2024-07-22 19:29:55.076049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.222 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:36.483 malloc0 00:25:36.483 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMCPSUYoQS 00:25:36.744 [2024-07-22 19:29:55.563540] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2970998 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2970998 /var/tmp/bdevperf.sock 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2970998 ']' 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.744 19:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.744 [2024-07-22 19:29:55.664264] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:36.744 [2024-07-22 19:29:55.664375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970998 ] 00:25:37.005 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.005 [2024-07-22 19:29:55.789252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.005 [2024-07-22 19:29:55.924354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.577 19:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.577 19:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:37.577 19:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IMCPSUYoQS 00:25:37.577 19:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:37.838 [2024-07-22 19:29:56.669139] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:37.838 nvme0n1 00:25:37.838 19:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.099 Running I/O for 1 seconds... 00:25:39.041 00:25:39.041 Latency(us) 00:25:39.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.041 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:39.041 Verification LBA range: start 0x0 length 0x2000 00:25:39.041 nvme0n1 : 1.06 3087.89 12.06 0.00 0.00 40358.54 5242.88 59856.21 00:25:39.041 =================================================================================================================== 00:25:39.041 Total : 3087.89 12.06 0.00 0.00 40358.54 5242.88 59856.21 00:25:39.041 0 00:25:39.041 19:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2970998 00:25:39.041 19:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2970998 ']' 00:25:39.041 19:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2970998 00:25:39.041 19:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:39.041 19:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:39.041 19:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2970998 00:25:39.302 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:39.302 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:39.302 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2970998' 00:25:39.302 killing process with pid 2970998 00:25:39.302 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2970998 00:25:39.302 Received shutdown signal, 
test time was about 1.000000 seconds 00:25:39.302 00:25:39.302 Latency(us) 00:25:39.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.302 =================================================================================================================== 00:25:39.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.302 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2970998 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2970636 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2970636 ']' 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2970636 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2970636 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2970636' 00:25:39.873 killing process with pid 2970636 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2970636 00:25:39.873 [2024-07-22 19:29:58.578644] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:39.873 19:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2970636 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2971826 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2971826 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2971826 ']' 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
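Note: every TLS case in this log relies on the same target-side preparation, visible above as setup_nvmf_tgt (the target/tls.sh@51-@58 steps): create the TCP transport, a subsystem, a TLS listener, a malloc namespace, and then allow the host with a PSK file. A condensed sketch of those RPC calls, with the NQNs, address and key path taken from this run; $rpc is shorthand for the scripts/rpc.py invocation the log spells out with absolute workspace paths:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS/PSK-secured; this is what triggers the
  # "TLS support is considered experimental" notice above.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # --psk points at the PSK file for this host; the target logs the
  # "deprecated feature PSK path" warning for this form.
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
       --psk /tmp/tmp.IMCPSUYoQS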
00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.816 19:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 [2024-07-22 19:29:59.617188] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:40.816 [2024-07-22 19:29:59.617310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.816 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.816 [2024-07-22 19:29:59.745607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.078 [2024-07-22 19:29:59.925677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.078 [2024-07-22 19:29:59.925724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.078 [2024-07-22 19:29:59.925736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.078 [2024-07-22 19:29:59.925745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.078 [2024-07-22 19:29:59.925757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.078 [2024-07-22 19:29:59.925788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.651 [2024-07-22 19:30:00.381409] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.651 malloc0 00:25:41.651 [2024-07-22 19:30:00.442418] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:41.651 [2024-07-22 19:30:00.442656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2972060 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2972060 /var/tmp/bdevperf.sock 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:41.651 19:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2972060 ']' 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:41.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.651 19:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.651 [2024-07-22 19:30:00.546069] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:41.651 [2024-07-22 19:30:00.546181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972060 ] 00:25:41.912 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.912 [2024-07-22 19:30:00.667498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.912 [2024-07-22 19:30:00.803633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.484 19:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.484 19:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:42.484 19:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IMCPSUYoQS 00:25:42.745 19:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:42.745 [2024-07-22 19:30:01.579977] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:42.745 nvme0n1 00:25:42.745 19:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:43.006 Running I/O for 1 seconds... 
00:25:43.948 00:25:43.948 Latency(us) 00:25:43.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.948 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:43.948 Verification LBA range: start 0x0 length 0x2000 00:25:43.948 nvme0n1 : 1.06 2410.33 9.42 0.00 0.00 51711.64 6853.97 74274.13 00:25:43.948 =================================================================================================================== 00:25:43.948 Total : 2410.33 9.42 0.00 0.00 51711.64 6853.97 74274.13 00:25:43.948 0 00:25:43.948 19:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:25:43.948 19:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.948 19:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.209 19:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.209 19:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:25:44.209 "subsystems": [ 00:25:44.209 { 00:25:44.209 "subsystem": "keyring", 00:25:44.209 "config": [ 00:25:44.209 { 00:25:44.209 "method": "keyring_file_add_key", 00:25:44.209 "params": { 00:25:44.209 "name": "key0", 00:25:44.209 "path": "/tmp/tmp.IMCPSUYoQS" 00:25:44.209 } 00:25:44.209 } 00:25:44.209 ] 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "subsystem": "iobuf", 00:25:44.209 "config": [ 00:25:44.209 { 00:25:44.209 "method": "iobuf_set_options", 00:25:44.209 "params": { 00:25:44.209 "small_pool_count": 8192, 00:25:44.209 "large_pool_count": 1024, 00:25:44.209 "small_bufsize": 8192, 00:25:44.209 "large_bufsize": 135168 00:25:44.209 } 00:25:44.209 } 00:25:44.209 ] 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "subsystem": "sock", 00:25:44.209 "config": [ 00:25:44.209 { 00:25:44.209 "method": "sock_set_default_impl", 00:25:44.209 "params": { 00:25:44.209 "impl_name": "posix" 00:25:44.209 } 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "method": "sock_impl_set_options", 00:25:44.209 "params": { 00:25:44.209 "impl_name": "ssl", 00:25:44.209 "recv_buf_size": 4096, 00:25:44.209 "send_buf_size": 4096, 00:25:44.209 "enable_recv_pipe": true, 00:25:44.209 "enable_quickack": false, 00:25:44.209 "enable_placement_id": 0, 00:25:44.209 "enable_zerocopy_send_server": true, 00:25:44.209 "enable_zerocopy_send_client": false, 00:25:44.209 "zerocopy_threshold": 0, 00:25:44.209 "tls_version": 0, 00:25:44.209 "enable_ktls": false 00:25:44.209 } 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "method": "sock_impl_set_options", 00:25:44.209 "params": { 00:25:44.209 "impl_name": "posix", 00:25:44.209 "recv_buf_size": 2097152, 00:25:44.209 "send_buf_size": 2097152, 00:25:44.209 "enable_recv_pipe": true, 00:25:44.209 "enable_quickack": false, 00:25:44.209 "enable_placement_id": 0, 00:25:44.209 "enable_zerocopy_send_server": true, 00:25:44.209 "enable_zerocopy_send_client": false, 00:25:44.209 "zerocopy_threshold": 0, 00:25:44.209 "tls_version": 0, 00:25:44.209 "enable_ktls": false 00:25:44.209 } 00:25:44.209 } 00:25:44.209 ] 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "subsystem": "vmd", 00:25:44.209 "config": [] 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "subsystem": "accel", 00:25:44.209 "config": [ 00:25:44.209 { 00:25:44.209 "method": "accel_set_options", 00:25:44.209 "params": { 00:25:44.209 "small_cache_size": 128, 00:25:44.209 "large_cache_size": 16, 00:25:44.209 "task_count": 2048, 00:25:44.209 "sequence_count": 2048, 00:25:44.209 "buf_count": 
2048 00:25:44.209 } 00:25:44.209 } 00:25:44.209 ] 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "subsystem": "bdev", 00:25:44.209 "config": [ 00:25:44.209 { 00:25:44.209 "method": "bdev_set_options", 00:25:44.209 "params": { 00:25:44.209 "bdev_io_pool_size": 65535, 00:25:44.209 "bdev_io_cache_size": 256, 00:25:44.209 "bdev_auto_examine": true, 00:25:44.209 "iobuf_small_cache_size": 128, 00:25:44.209 "iobuf_large_cache_size": 16 00:25:44.209 } 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "method": "bdev_raid_set_options", 00:25:44.209 "params": { 00:25:44.209 "process_window_size_kb": 1024, 00:25:44.209 "process_max_bandwidth_mb_sec": 0 00:25:44.209 } 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "method": "bdev_iscsi_set_options", 00:25:44.209 "params": { 00:25:44.209 "timeout_sec": 30 00:25:44.209 } 00:25:44.209 }, 00:25:44.209 { 00:25:44.209 "method": "bdev_nvme_set_options", 00:25:44.209 "params": { 00:25:44.209 "action_on_timeout": "none", 00:25:44.209 "timeout_us": 0, 00:25:44.209 "timeout_admin_us": 0, 00:25:44.210 "keep_alive_timeout_ms": 10000, 00:25:44.210 "arbitration_burst": 0, 00:25:44.210 "low_priority_weight": 0, 00:25:44.210 "medium_priority_weight": 0, 00:25:44.210 "high_priority_weight": 0, 00:25:44.210 "nvme_adminq_poll_period_us": 10000, 00:25:44.210 "nvme_ioq_poll_period_us": 0, 00:25:44.210 "io_queue_requests": 0, 00:25:44.210 "delay_cmd_submit": true, 00:25:44.210 "transport_retry_count": 4, 00:25:44.210 "bdev_retry_count": 3, 00:25:44.210 "transport_ack_timeout": 0, 00:25:44.210 "ctrlr_loss_timeout_sec": 0, 00:25:44.210 "reconnect_delay_sec": 0, 00:25:44.210 "fast_io_fail_timeout_sec": 0, 00:25:44.210 "disable_auto_failback": false, 00:25:44.210 "generate_uuids": false, 00:25:44.210 "transport_tos": 0, 00:25:44.210 "nvme_error_stat": false, 00:25:44.210 "rdma_srq_size": 0, 00:25:44.210 "io_path_stat": false, 00:25:44.210 "allow_accel_sequence": false, 00:25:44.210 "rdma_max_cq_size": 0, 00:25:44.210 "rdma_cm_event_timeout_ms": 0, 00:25:44.210 "dhchap_digests": [ 00:25:44.210 "sha256", 00:25:44.210 "sha384", 00:25:44.210 "sha512" 00:25:44.210 ], 00:25:44.210 "dhchap_dhgroups": [ 00:25:44.210 "null", 00:25:44.210 "ffdhe2048", 00:25:44.210 "ffdhe3072", 00:25:44.210 "ffdhe4096", 00:25:44.210 "ffdhe6144", 00:25:44.210 "ffdhe8192" 00:25:44.210 ] 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "bdev_nvme_set_hotplug", 00:25:44.210 "params": { 00:25:44.210 "period_us": 100000, 00:25:44.210 "enable": false 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "bdev_malloc_create", 00:25:44.210 "params": { 00:25:44.210 "name": "malloc0", 00:25:44.210 "num_blocks": 8192, 00:25:44.210 "block_size": 4096, 00:25:44.210 "physical_block_size": 4096, 00:25:44.210 "uuid": "4b6936d7-1a60-4b95-b463-ccd94144624d", 00:25:44.210 "optimal_io_boundary": 0, 00:25:44.210 "md_size": 0, 00:25:44.210 "dif_type": 0, 00:25:44.210 "dif_is_head_of_md": false, 00:25:44.210 "dif_pi_format": 0 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "bdev_wait_for_examine" 00:25:44.210 } 00:25:44.210 ] 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "subsystem": "nbd", 00:25:44.210 "config": [] 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "subsystem": "scheduler", 00:25:44.210 "config": [ 00:25:44.210 { 00:25:44.210 "method": "framework_set_scheduler", 00:25:44.210 "params": { 00:25:44.210 "name": "static" 00:25:44.210 } 00:25:44.210 } 00:25:44.210 ] 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "subsystem": "nvmf", 00:25:44.210 "config": [ 00:25:44.210 { 00:25:44.210 
"method": "nvmf_set_config", 00:25:44.210 "params": { 00:25:44.210 "discovery_filter": "match_any", 00:25:44.210 "admin_cmd_passthru": { 00:25:44.210 "identify_ctrlr": false 00:25:44.210 } 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "nvmf_set_max_subsystems", 00:25:44.210 "params": { 00:25:44.210 "max_subsystems": 1024 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "nvmf_set_crdt", 00:25:44.210 "params": { 00:25:44.210 "crdt1": 0, 00:25:44.210 "crdt2": 0, 00:25:44.210 "crdt3": 0 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "nvmf_create_transport", 00:25:44.210 "params": { 00:25:44.210 "trtype": "TCP", 00:25:44.210 "max_queue_depth": 128, 00:25:44.210 "max_io_qpairs_per_ctrlr": 127, 00:25:44.210 "in_capsule_data_size": 4096, 00:25:44.210 "max_io_size": 131072, 00:25:44.210 "io_unit_size": 131072, 00:25:44.210 "max_aq_depth": 128, 00:25:44.210 "num_shared_buffers": 511, 00:25:44.210 "buf_cache_size": 4294967295, 00:25:44.210 "dif_insert_or_strip": false, 00:25:44.210 "zcopy": false, 00:25:44.210 "c2h_success": false, 00:25:44.210 "sock_priority": 0, 00:25:44.210 "abort_timeout_sec": 1, 00:25:44.210 "ack_timeout": 0, 00:25:44.210 "data_wr_pool_size": 0 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "nvmf_create_subsystem", 00:25:44.210 "params": { 00:25:44.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.210 "allow_any_host": false, 00:25:44.210 "serial_number": "00000000000000000000", 00:25:44.210 "model_number": "SPDK bdev Controller", 00:25:44.210 "max_namespaces": 32, 00:25:44.210 "min_cntlid": 1, 00:25:44.210 "max_cntlid": 65519, 00:25:44.210 "ana_reporting": false 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "nvmf_subsystem_add_host", 00:25:44.210 "params": { 00:25:44.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.210 "host": "nqn.2016-06.io.spdk:host1", 00:25:44.210 "psk": "key0" 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "nvmf_subsystem_add_ns", 00:25:44.210 "params": { 00:25:44.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.210 "namespace": { 00:25:44.210 "nsid": 1, 00:25:44.210 "bdev_name": "malloc0", 00:25:44.210 "nguid": "4B6936D71A604B95B463CCD94144624D", 00:25:44.210 "uuid": "4b6936d7-1a60-4b95-b463-ccd94144624d", 00:25:44.210 "no_auto_visible": false 00:25:44.210 } 00:25:44.210 } 00:25:44.210 }, 00:25:44.210 { 00:25:44.210 "method": "nvmf_subsystem_add_listener", 00:25:44.210 "params": { 00:25:44.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.210 "listen_address": { 00:25:44.210 "trtype": "TCP", 00:25:44.210 "adrfam": "IPv4", 00:25:44.210 "traddr": "10.0.0.2", 00:25:44.210 "trsvcid": "4420" 00:25:44.210 }, 00:25:44.210 "secure_channel": false, 00:25:44.210 "sock_impl": "ssl" 00:25:44.210 } 00:25:44.210 } 00:25:44.210 ] 00:25:44.210 } 00:25:44.210 ] 00:25:44.210 }' 00:25:44.210 19:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:44.471 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:25:44.471 "subsystems": [ 00:25:44.471 { 00:25:44.471 "subsystem": "keyring", 00:25:44.471 "config": [ 00:25:44.471 { 00:25:44.471 "method": "keyring_file_add_key", 00:25:44.471 "params": { 00:25:44.471 "name": "key0", 00:25:44.471 "path": "/tmp/tmp.IMCPSUYoQS" 00:25:44.471 } 00:25:44.471 } 00:25:44.471 ] 00:25:44.471 }, 00:25:44.471 { 00:25:44.471 "subsystem": "iobuf", 00:25:44.471 
"config": [ 00:25:44.471 { 00:25:44.471 "method": "iobuf_set_options", 00:25:44.472 "params": { 00:25:44.472 "small_pool_count": 8192, 00:25:44.472 "large_pool_count": 1024, 00:25:44.472 "small_bufsize": 8192, 00:25:44.472 "large_bufsize": 135168 00:25:44.472 } 00:25:44.472 } 00:25:44.472 ] 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "subsystem": "sock", 00:25:44.472 "config": [ 00:25:44.472 { 00:25:44.472 "method": "sock_set_default_impl", 00:25:44.472 "params": { 00:25:44.472 "impl_name": "posix" 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "sock_impl_set_options", 00:25:44.472 "params": { 00:25:44.472 "impl_name": "ssl", 00:25:44.472 "recv_buf_size": 4096, 00:25:44.472 "send_buf_size": 4096, 00:25:44.472 "enable_recv_pipe": true, 00:25:44.472 "enable_quickack": false, 00:25:44.472 "enable_placement_id": 0, 00:25:44.472 "enable_zerocopy_send_server": true, 00:25:44.472 "enable_zerocopy_send_client": false, 00:25:44.472 "zerocopy_threshold": 0, 00:25:44.472 "tls_version": 0, 00:25:44.472 "enable_ktls": false 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "sock_impl_set_options", 00:25:44.472 "params": { 00:25:44.472 "impl_name": "posix", 00:25:44.472 "recv_buf_size": 2097152, 00:25:44.472 "send_buf_size": 2097152, 00:25:44.472 "enable_recv_pipe": true, 00:25:44.472 "enable_quickack": false, 00:25:44.472 "enable_placement_id": 0, 00:25:44.472 "enable_zerocopy_send_server": true, 00:25:44.472 "enable_zerocopy_send_client": false, 00:25:44.472 "zerocopy_threshold": 0, 00:25:44.472 "tls_version": 0, 00:25:44.472 "enable_ktls": false 00:25:44.472 } 00:25:44.472 } 00:25:44.472 ] 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "subsystem": "vmd", 00:25:44.472 "config": [] 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "subsystem": "accel", 00:25:44.472 "config": [ 00:25:44.472 { 00:25:44.472 "method": "accel_set_options", 00:25:44.472 "params": { 00:25:44.472 "small_cache_size": 128, 00:25:44.472 "large_cache_size": 16, 00:25:44.472 "task_count": 2048, 00:25:44.472 "sequence_count": 2048, 00:25:44.472 "buf_count": 2048 00:25:44.472 } 00:25:44.472 } 00:25:44.472 ] 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "subsystem": "bdev", 00:25:44.472 "config": [ 00:25:44.472 { 00:25:44.472 "method": "bdev_set_options", 00:25:44.472 "params": { 00:25:44.472 "bdev_io_pool_size": 65535, 00:25:44.472 "bdev_io_cache_size": 256, 00:25:44.472 "bdev_auto_examine": true, 00:25:44.472 "iobuf_small_cache_size": 128, 00:25:44.472 "iobuf_large_cache_size": 16 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "bdev_raid_set_options", 00:25:44.472 "params": { 00:25:44.472 "process_window_size_kb": 1024, 00:25:44.472 "process_max_bandwidth_mb_sec": 0 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "bdev_iscsi_set_options", 00:25:44.472 "params": { 00:25:44.472 "timeout_sec": 30 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "bdev_nvme_set_options", 00:25:44.472 "params": { 00:25:44.472 "action_on_timeout": "none", 00:25:44.472 "timeout_us": 0, 00:25:44.472 "timeout_admin_us": 0, 00:25:44.472 "keep_alive_timeout_ms": 10000, 00:25:44.472 "arbitration_burst": 0, 00:25:44.472 "low_priority_weight": 0, 00:25:44.472 "medium_priority_weight": 0, 00:25:44.472 "high_priority_weight": 0, 00:25:44.472 "nvme_adminq_poll_period_us": 10000, 00:25:44.472 "nvme_ioq_poll_period_us": 0, 00:25:44.472 "io_queue_requests": 512, 00:25:44.472 "delay_cmd_submit": true, 00:25:44.472 "transport_retry_count": 4, 00:25:44.472 "bdev_retry_count": 3, 
00:25:44.472 "transport_ack_timeout": 0, 00:25:44.472 "ctrlr_loss_timeout_sec": 0, 00:25:44.472 "reconnect_delay_sec": 0, 00:25:44.472 "fast_io_fail_timeout_sec": 0, 00:25:44.472 "disable_auto_failback": false, 00:25:44.472 "generate_uuids": false, 00:25:44.472 "transport_tos": 0, 00:25:44.472 "nvme_error_stat": false, 00:25:44.472 "rdma_srq_size": 0, 00:25:44.472 "io_path_stat": false, 00:25:44.472 "allow_accel_sequence": false, 00:25:44.472 "rdma_max_cq_size": 0, 00:25:44.472 "rdma_cm_event_timeout_ms": 0, 00:25:44.472 "dhchap_digests": [ 00:25:44.472 "sha256", 00:25:44.472 "sha384", 00:25:44.472 "sha512" 00:25:44.472 ], 00:25:44.472 "dhchap_dhgroups": [ 00:25:44.472 "null", 00:25:44.472 "ffdhe2048", 00:25:44.472 "ffdhe3072", 00:25:44.472 "ffdhe4096", 00:25:44.472 "ffdhe6144", 00:25:44.472 "ffdhe8192" 00:25:44.472 ] 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "bdev_nvme_attach_controller", 00:25:44.472 "params": { 00:25:44.472 "name": "nvme0", 00:25:44.472 "trtype": "TCP", 00:25:44.472 "adrfam": "IPv4", 00:25:44.472 "traddr": "10.0.0.2", 00:25:44.472 "trsvcid": "4420", 00:25:44.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.472 "prchk_reftag": false, 00:25:44.472 "prchk_guard": false, 00:25:44.472 "ctrlr_loss_timeout_sec": 0, 00:25:44.472 "reconnect_delay_sec": 0, 00:25:44.472 "fast_io_fail_timeout_sec": 0, 00:25:44.472 "psk": "key0", 00:25:44.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.472 "hdgst": false, 00:25:44.472 "ddgst": false 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "bdev_nvme_set_hotplug", 00:25:44.472 "params": { 00:25:44.472 "period_us": 100000, 00:25:44.472 "enable": false 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "bdev_enable_histogram", 00:25:44.472 "params": { 00:25:44.472 "name": "nvme0n1", 00:25:44.472 "enable": true 00:25:44.472 } 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "method": "bdev_wait_for_examine" 00:25:44.472 } 00:25:44.472 ] 00:25:44.472 }, 00:25:44.472 { 00:25:44.472 "subsystem": "nbd", 00:25:44.472 "config": [] 00:25:44.472 } 00:25:44.472 ] 00:25:44.472 }' 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2972060 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2972060 ']' 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2972060 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2972060 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2972060' 00:25:44.472 killing process with pid 2972060 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2972060 00:25:44.472 Received shutdown signal, test time was about 1.000000 seconds 00:25:44.472 00:25:44.472 Latency(us) 00:25:44.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.472 
=================================================================================================================== 00:25:44.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.472 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2972060 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2971826 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2971826 ']' 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2971826 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2971826 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2971826' 00:25:45.044 killing process with pid 2971826 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2971826 00:25:45.044 19:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2971826 00:25:46.051 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:25:46.051 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:46.051 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.051 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:25:46.051 "subsystems": [ 00:25:46.051 { 00:25:46.051 "subsystem": "keyring", 00:25:46.051 "config": [ 00:25:46.051 { 00:25:46.051 "method": "keyring_file_add_key", 00:25:46.051 "params": { 00:25:46.051 "name": "key0", 00:25:46.051 "path": "/tmp/tmp.IMCPSUYoQS" 00:25:46.051 } 00:25:46.051 } 00:25:46.051 ] 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "subsystem": "iobuf", 00:25:46.051 "config": [ 00:25:46.051 { 00:25:46.051 "method": "iobuf_set_options", 00:25:46.051 "params": { 00:25:46.051 "small_pool_count": 8192, 00:25:46.051 "large_pool_count": 1024, 00:25:46.051 "small_bufsize": 8192, 00:25:46.051 "large_bufsize": 135168 00:25:46.051 } 00:25:46.051 } 00:25:46.051 ] 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "subsystem": "sock", 00:25:46.051 "config": [ 00:25:46.051 { 00:25:46.051 "method": "sock_set_default_impl", 00:25:46.051 "params": { 00:25:46.051 "impl_name": "posix" 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": "sock_impl_set_options", 00:25:46.051 "params": { 00:25:46.051 "impl_name": "ssl", 00:25:46.051 "recv_buf_size": 4096, 00:25:46.051 "send_buf_size": 4096, 00:25:46.051 "enable_recv_pipe": true, 00:25:46.051 "enable_quickack": false, 00:25:46.051 "enable_placement_id": 0, 00:25:46.051 "enable_zerocopy_send_server": true, 00:25:46.051 "enable_zerocopy_send_client": false, 00:25:46.051 "zerocopy_threshold": 0, 00:25:46.051 "tls_version": 0, 00:25:46.051 "enable_ktls": false 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": 
"sock_impl_set_options", 00:25:46.051 "params": { 00:25:46.051 "impl_name": "posix", 00:25:46.051 "recv_buf_size": 2097152, 00:25:46.051 "send_buf_size": 2097152, 00:25:46.051 "enable_recv_pipe": true, 00:25:46.051 "enable_quickack": false, 00:25:46.051 "enable_placement_id": 0, 00:25:46.051 "enable_zerocopy_send_server": true, 00:25:46.051 "enable_zerocopy_send_client": false, 00:25:46.051 "zerocopy_threshold": 0, 00:25:46.051 "tls_version": 0, 00:25:46.051 "enable_ktls": false 00:25:46.051 } 00:25:46.051 } 00:25:46.051 ] 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "subsystem": "vmd", 00:25:46.051 "config": [] 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "subsystem": "accel", 00:25:46.051 "config": [ 00:25:46.051 { 00:25:46.051 "method": "accel_set_options", 00:25:46.051 "params": { 00:25:46.051 "small_cache_size": 128, 00:25:46.051 "large_cache_size": 16, 00:25:46.051 "task_count": 2048, 00:25:46.051 "sequence_count": 2048, 00:25:46.051 "buf_count": 2048 00:25:46.051 } 00:25:46.051 } 00:25:46.051 ] 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "subsystem": "bdev", 00:25:46.051 "config": [ 00:25:46.051 { 00:25:46.051 "method": "bdev_set_options", 00:25:46.051 "params": { 00:25:46.051 "bdev_io_pool_size": 65535, 00:25:46.051 "bdev_io_cache_size": 256, 00:25:46.051 "bdev_auto_examine": true, 00:25:46.051 "iobuf_small_cache_size": 128, 00:25:46.051 "iobuf_large_cache_size": 16 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": "bdev_raid_set_options", 00:25:46.051 "params": { 00:25:46.051 "process_window_size_kb": 1024, 00:25:46.051 "process_max_bandwidth_mb_sec": 0 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": "bdev_iscsi_set_options", 00:25:46.051 "params": { 00:25:46.051 "timeout_sec": 30 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": "bdev_nvme_set_options", 00:25:46.051 "params": { 00:25:46.051 "action_on_timeout": "none", 00:25:46.051 "timeout_us": 0, 00:25:46.051 "timeout_admin_us": 0, 00:25:46.051 "keep_alive_timeout_ms": 10000, 00:25:46.051 "arbitration_burst": 0, 00:25:46.051 "low_priority_weight": 0, 00:25:46.051 "medium_priority_weight": 0, 00:25:46.051 "high_priority_weight": 0, 00:25:46.051 "nvme_adminq_poll_period_us": 10000, 00:25:46.051 "nvme_ioq_poll_period_us": 0, 00:25:46.051 "io_queue_requests": 0, 00:25:46.051 "delay_cmd_submit": true, 00:25:46.051 "transport_retry_count": 4, 00:25:46.051 "bdev_retry_count": 3, 00:25:46.051 "transport_ack_timeout": 0, 00:25:46.051 "ctrlr_loss_timeout_sec": 0, 00:25:46.051 "reconnect_delay_sec": 0, 00:25:46.051 "fast_io_fail_timeout_sec": 0, 00:25:46.051 "disable_auto_failback": false, 00:25:46.051 "generate_uuids": false, 00:25:46.051 "transport_tos": 0, 00:25:46.051 "nvme_error_stat": false, 00:25:46.051 "rdma_srq_size": 0, 00:25:46.051 "io_path_stat": false, 00:25:46.051 "allow_accel_sequence": false, 00:25:46.051 "rdma_max_cq_size": 0, 00:25:46.051 "rdma_cm_event_timeout_ms": 0, 00:25:46.051 "dhchap_digests": [ 00:25:46.051 "sha256", 00:25:46.051 "sha384", 00:25:46.051 "sha512" 00:25:46.051 ], 00:25:46.051 "dhchap_dhgroups": [ 00:25:46.051 "null", 00:25:46.051 "ffdhe2048", 00:25:46.051 "ffdhe3072", 00:25:46.051 "ffdhe4096", 00:25:46.051 "ffdhe6144", 00:25:46.051 "ffdhe8192" 00:25:46.051 ] 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": "bdev_nvme_set_hotplug", 00:25:46.051 "params": { 00:25:46.051 "period_us": 100000, 00:25:46.051 "enable": false 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": "bdev_malloc_create", 00:25:46.051 
"params": { 00:25:46.051 "name": "malloc0", 00:25:46.051 "num_blocks": 8192, 00:25:46.051 "block_size": 4096, 00:25:46.051 "physical_block_size": 4096, 00:25:46.051 "uuid": "4b6936d7-1a60-4b95-b463-ccd94144624d", 00:25:46.051 "optimal_io_boundary": 0, 00:25:46.051 "md_size": 0, 00:25:46.051 "dif_type": 0, 00:25:46.051 "dif_is_head_of_md": false, 00:25:46.051 "dif_pi_format": 0 00:25:46.051 } 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "method": "bdev_wait_for_examine" 00:25:46.051 } 00:25:46.051 ] 00:25:46.051 }, 00:25:46.051 { 00:25:46.051 "subsystem": "nbd", 00:25:46.052 "config": [] 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "subsystem": "scheduler", 00:25:46.052 "config": [ 00:25:46.052 { 00:25:46.052 "method": "framework_set_scheduler", 00:25:46.052 "params": { 00:25:46.052 "name": "static" 00:25:46.052 } 00:25:46.052 } 00:25:46.052 ] 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "subsystem": "nvmf", 00:25:46.052 "config": [ 00:25:46.052 { 00:25:46.052 "method": "nvmf_set_config", 00:25:46.052 "params": { 00:25:46.052 "discovery_filter": "match_any", 00:25:46.052 "admin_cmd_passthru": { 00:25:46.052 "identify_ctrlr": false 00:25:46.052 } 00:25:46.052 } 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "method": "nvmf_set_max_subsystems", 00:25:46.052 "params": { 00:25:46.052 "max_subsystems": 1024 00:25:46.052 } 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "method": "nvmf_set_crdt", 00:25:46.052 "params": { 00:25:46.052 "crdt1": 0, 00:25:46.052 "crdt2": 0, 00:25:46.052 "crdt3": 0 00:25:46.052 } 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "method": "nvmf_create_transport", 00:25:46.052 "params": { 00:25:46.052 "trtype": "TCP", 00:25:46.052 "max_queue_depth": 128, 00:25:46.052 "max_io_qpairs_per_ctrlr": 127, 00:25:46.052 "in_capsule_data_size": 4096, 00:25:46.052 "max_io_size": 131072, 00:25:46.052 "io_unit_size": 131072, 00:25:46.052 "max_aq_depth": 128, 00:25:46.052 "num_shared_buffers": 511, 00:25:46.052 "buf_cache_size": 4294967295, 00:25:46.052 "dif_insert_or_strip": false, 00:25:46.052 "zcopy": false, 00:25:46.052 "c2h_success": false, 00:25:46.052 "sock_priority": 0, 00:25:46.052 "abort_timeout_sec": 1, 00:25:46.052 "ack_timeout": 0, 00:25:46.052 "data_wr_pool_size": 0 00:25:46.052 } 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "method": "nvmf_create_subsystem", 00:25:46.052 "params": { 00:25:46.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.052 "allow_any_host": false, 00:25:46.052 "serial_number": "00000000000000000000", 00:25:46.052 "model_number": "SPDK bdev Controller", 00:25:46.052 "max_namespaces": 32, 00:25:46.052 "min_cntlid": 1, 00:25:46.052 "max_cntlid": 65519, 00:25:46.052 "ana_reporting": false 00:25:46.052 } 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "method": "nvmf_subsystem_add_host", 00:25:46.052 "params": { 00:25:46.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.052 "host": "nqn.2016-06.io.spdk:host1", 00:25:46.052 "psk": "key0" 00:25:46.052 } 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "method": "nvmf_subsystem_add_ns", 00:25:46.052 "params": { 00:25:46.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.052 "namespace": { 00:25:46.052 "nsid": 1, 00:25:46.052 "bdev_name": "malloc0", 00:25:46.052 "nguid": "4B6936D71A604B95B463CCD94144624D", 00:25:46.052 "uuid": "4b6936d7-1a60-4b95-b463-ccd94144624d", 00:25:46.052 "no_auto_visible": false 00:25:46.052 } 00:25:46.052 } 00:25:46.052 }, 00:25:46.052 { 00:25:46.052 "method": "nvmf_subsystem_add_listener", 00:25:46.052 "params": { 00:25:46.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.052 "listen_address": { 
00:25:46.052 "trtype": "TCP", 00:25:46.052 "adrfam": "IPv4", 00:25:46.052 "traddr": "10.0.0.2", 00:25:46.052 "trsvcid": "4420" 00:25:46.052 }, 00:25:46.052 "secure_channel": false, 00:25:46.052 "sock_impl": "ssl" 00:25:46.052 } 00:25:46.052 } 00:25:46.052 ] 00:25:46.052 } 00:25:46.052 ] 00:25:46.052 }' 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2972955 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2972955 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2972955 ']' 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.052 19:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.052 [2024-07-22 19:30:04.840675] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:46.052 [2024-07-22 19:30:04.840790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.052 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.052 [2024-07-22 19:30:04.970321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.313 [2024-07-22 19:30:05.153221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.313 [2024-07-22 19:30:05.153269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.313 [2024-07-22 19:30:05.153282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.313 [2024-07-22 19:30:05.153291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.313 [2024-07-22 19:30:05.153304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
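Note: from target/tls.sh@265 onward the test stops configuring over individual RPCs and instead snapshots both running applications with save_config, then replays those JSON blobs on restart; the -c /dev/fd/62 on the nvmf_tgt command line above (and the -c /dev/fd/63 on the bdevperf line below) are process substitutions around an echo of the captured config. A rough sketch of that round trip, assuming the socket paths of this run and treating the harness's rpc_cmd helper as equivalent to scripts/rpc.py on the default socket:

  # Capture the live target and the live bdevperf as JSON.
  tgtcfg=$(./scripts/rpc.py save_config)
  bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

  # Restart the target with the captured config; it comes back up with the
  # transport, subsystem, TLS listener and PSK host entry already in place.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &

  # Restart bdevperf the same way; its saved config carries the keyring key
  # and the bdev_nvme_attach_controller call, so it reconnects on startup.
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")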
00:25:46.313 [2024-07-22 19:30:05.153389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.885 [2024-07-22 19:30:05.551116] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.885 [2024-07-22 19:30:05.583131] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:46.885 [2024-07-22 19:30:05.583397] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2973184 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2973184 /var/tmp/bdevperf.sock 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2973184 ']' 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
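Both the target (nvmfpid=2972955) and the bdevperf helper (bdevperf_pid=2973184) are launched in the background and then gated on waitforlisten, which simply polls until the application answers on its RPC UNIX domain socket. A rough sketch of that pattern follows; the real helper lives in test/common/autotest_common.sh and is more elaborate, so this is illustrative only.

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do                 # max_retries=100, as in the trace above
        kill -0 "$pid" 2> /dev/null || return 1     # give up if the app already died
        if "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                # the app is up and serving RPCs
        fi
        sleep 0.5
    done
    return 1
}
# e.g. waitforlisten "$nvmfpid" and waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock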
00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.885 19:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:25:46.885 "subsystems": [ 00:25:46.885 { 00:25:46.885 "subsystem": "keyring", 00:25:46.885 "config": [ 00:25:46.885 { 00:25:46.885 "method": "keyring_file_add_key", 00:25:46.885 "params": { 00:25:46.885 "name": "key0", 00:25:46.885 "path": "/tmp/tmp.IMCPSUYoQS" 00:25:46.885 } 00:25:46.885 } 00:25:46.885 ] 00:25:46.885 }, 00:25:46.885 { 00:25:46.885 "subsystem": "iobuf", 00:25:46.885 "config": [ 00:25:46.885 { 00:25:46.885 "method": "iobuf_set_options", 00:25:46.885 "params": { 00:25:46.885 "small_pool_count": 8192, 00:25:46.885 "large_pool_count": 1024, 00:25:46.885 "small_bufsize": 8192, 00:25:46.885 "large_bufsize": 135168 00:25:46.885 } 00:25:46.885 } 00:25:46.885 ] 00:25:46.885 }, 00:25:46.885 { 00:25:46.885 "subsystem": "sock", 00:25:46.885 "config": [ 00:25:46.885 { 00:25:46.885 "method": "sock_set_default_impl", 00:25:46.885 "params": { 00:25:46.885 "impl_name": "posix" 00:25:46.885 } 00:25:46.885 }, 00:25:46.885 { 00:25:46.885 "method": "sock_impl_set_options", 00:25:46.885 "params": { 00:25:46.885 "impl_name": "ssl", 00:25:46.885 "recv_buf_size": 4096, 00:25:46.885 "send_buf_size": 4096, 00:25:46.885 "enable_recv_pipe": true, 00:25:46.885 "enable_quickack": false, 00:25:46.885 "enable_placement_id": 0, 00:25:46.885 "enable_zerocopy_send_server": true, 00:25:46.885 "enable_zerocopy_send_client": false, 00:25:46.885 "zerocopy_threshold": 0, 00:25:46.885 "tls_version": 0, 00:25:46.885 "enable_ktls": false 00:25:46.885 } 00:25:46.885 }, 00:25:46.885 { 00:25:46.885 "method": "sock_impl_set_options", 00:25:46.885 "params": { 00:25:46.886 "impl_name": "posix", 00:25:46.886 "recv_buf_size": 2097152, 00:25:46.886 "send_buf_size": 2097152, 00:25:46.886 "enable_recv_pipe": true, 00:25:46.886 "enable_quickack": false, 00:25:46.886 "enable_placement_id": 0, 00:25:46.886 "enable_zerocopy_send_server": true, 00:25:46.886 "enable_zerocopy_send_client": false, 00:25:46.886 "zerocopy_threshold": 0, 00:25:46.886 "tls_version": 0, 00:25:46.886 "enable_ktls": false 00:25:46.886 } 00:25:46.886 } 00:25:46.886 ] 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "subsystem": "vmd", 00:25:46.886 "config": [] 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "subsystem": "accel", 00:25:46.886 "config": [ 00:25:46.886 { 00:25:46.886 "method": "accel_set_options", 00:25:46.886 "params": { 00:25:46.886 "small_cache_size": 128, 00:25:46.886 "large_cache_size": 16, 00:25:46.886 "task_count": 2048, 00:25:46.886 "sequence_count": 2048, 00:25:46.886 "buf_count": 2048 00:25:46.886 } 00:25:46.886 } 00:25:46.886 ] 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "subsystem": "bdev", 00:25:46.886 "config": [ 00:25:46.886 { 00:25:46.886 "method": "bdev_set_options", 00:25:46.886 "params": { 00:25:46.886 "bdev_io_pool_size": 65535, 00:25:46.886 "bdev_io_cache_size": 256, 00:25:46.886 "bdev_auto_examine": true, 00:25:46.886 "iobuf_small_cache_size": 128, 00:25:46.886 "iobuf_large_cache_size": 16 00:25:46.886 } 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "method": "bdev_raid_set_options", 00:25:46.886 
"params": { 00:25:46.886 "process_window_size_kb": 1024, 00:25:46.886 "process_max_bandwidth_mb_sec": 0 00:25:46.886 } 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "method": "bdev_iscsi_set_options", 00:25:46.886 "params": { 00:25:46.886 "timeout_sec": 30 00:25:46.886 } 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "method": "bdev_nvme_set_options", 00:25:46.886 "params": { 00:25:46.886 "action_on_timeout": "none", 00:25:46.886 "timeout_us": 0, 00:25:46.886 "timeout_admin_us": 0, 00:25:46.886 "keep_alive_timeout_ms": 10000, 00:25:46.886 "arbitration_burst": 0, 00:25:46.886 "low_priority_weight": 0, 00:25:46.886 "medium_priority_weight": 0, 00:25:46.886 "high_priority_weight": 0, 00:25:46.886 "nvme_adminq_poll_period_us": 10000, 00:25:46.886 "nvme_ioq_poll_period_us": 0, 00:25:46.886 "io_queue_requests": 512, 00:25:46.886 "delay_cmd_submit": true, 00:25:46.886 "transport_retry_count": 4, 00:25:46.886 "bdev_retry_count": 3, 00:25:46.886 "transport_ack_timeout": 0, 00:25:46.886 "ctrlr_loss_timeout_sec": 0, 00:25:46.886 "reconnect_delay_sec": 0, 00:25:46.886 "fast_io_fail_timeout_sec": 0, 00:25:46.886 "disable_auto_failback": false, 00:25:46.886 "generate_uuids": false, 00:25:46.886 "transport_tos": 0, 00:25:46.886 "nvme_error_stat": false, 00:25:46.886 "rdma_srq_size": 0, 00:25:46.886 "io_path_stat": false, 00:25:46.886 "allow_accel_sequence": false, 00:25:46.886 "rdma_max_cq_size": 0, 00:25:46.886 "rdma_cm_event_timeout_ms": 0, 00:25:46.886 "dhchap_digests": [ 00:25:46.886 "sha256", 00:25:46.886 "sha384", 00:25:46.886 "sha512" 00:25:46.886 ], 00:25:46.886 "dhchap_dhgroups": [ 00:25:46.886 "null", 00:25:46.886 "ffdhe2048", 00:25:46.886 "ffdhe3072", 00:25:46.886 "ffdhe4096", 00:25:46.886 "ffdhe6144", 00:25:46.886 "ffdhe8192" 00:25:46.886 ] 00:25:46.886 } 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "method": "bdev_nvme_attach_controller", 00:25:46.886 "params": { 00:25:46.886 "name": "nvme0", 00:25:46.886 "trtype": "TCP", 00:25:46.886 "adrfam": "IPv4", 00:25:46.886 "traddr": "10.0.0.2", 00:25:46.886 "trsvcid": "4420", 00:25:46.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.886 "prchk_reftag": false, 00:25:46.886 "prchk_guard": false, 00:25:46.886 "ctrlr_loss_timeout_sec": 0, 00:25:46.886 "reconnect_delay_sec": 0, 00:25:46.886 "fast_io_fail_timeout_sec": 0, 00:25:46.886 "psk": "key0", 00:25:46.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.886 "hdgst": false, 00:25:46.886 "ddgst": false 00:25:46.886 } 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "method": "bdev_nvme_set_hotplug", 00:25:46.886 "params": { 00:25:46.886 "period_us": 100000, 00:25:46.886 "enable": false 00:25:46.886 } 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "method": "bdev_enable_histogram", 00:25:46.886 "params": { 00:25:46.886 "name": "nvme0n1", 00:25:46.886 "enable": true 00:25:46.886 } 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "method": "bdev_wait_for_examine" 00:25:46.886 } 00:25:46.886 ] 00:25:46.886 }, 00:25:46.886 { 00:25:46.886 "subsystem": "nbd", 00:25:46.886 "config": [] 00:25:46.886 } 00:25:46.886 ] 00:25:46.886 }' 00:25:46.886 [2024-07-22 19:30:05.732470] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:46.886 [2024-07-22 19:30:05.732580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973184 ] 00:25:46.886 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.147 [2024-07-22 19:30:05.855119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.147 [2024-07-22 19:30:05.989866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.407 [2024-07-22 19:30:06.237888] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.669 19:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.669 19:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:47.669 19:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:47.669 19:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:25:47.669 19:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.669 19:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:47.930 Running I/O for 1 seconds... 00:25:48.875 00:25:48.876 Latency(us) 00:25:48.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.876 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:48.876 Verification LBA range: start 0x0 length 0x2000 00:25:48.876 nvme0n1 : 1.03 3174.57 12.40 0.00 0.00 39731.81 5352.11 48715.09 00:25:48.876 =================================================================================================================== 00:25:48.876 Total : 3174.57 12.40 0.00 0.00 39731.81 5352.11 48715.09 00:25:48.876 0 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:48.876 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:48.876 nvmf_trace.0 00:25:49.137 19:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2973184 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2973184 ']' 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2973184 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2973184 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2973184' 00:25:49.137 killing process with pid 2973184 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2973184 00:25:49.137 Received shutdown signal, test time was about 1.000000 seconds 00:25:49.137 00:25:49.137 Latency(us) 00:25:49.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.137 =================================================================================================================== 00:25:49.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.137 19:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2973184 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.709 rmmod nvme_tcp 00:25:49.709 rmmod nvme_fabrics 00:25:49.709 rmmod nvme_keyring 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2972955 ']' 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2972955 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2972955 ']' 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2972955 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.709 19:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2972955 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2972955' 00:25:49.709 killing process with pid 2972955 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2972955 00:25:49.709 19:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2972955 00:25:50.651 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.651 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.651 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.651 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.651 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.652 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.652 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.652 19:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.565 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.565 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Ai0HM4rUy4 /tmp/tmp.QGfVgDVYtd /tmp/tmp.IMCPSUYoQS 00:25:52.828 00:25:52.828 real 1m34.837s 00:25:52.828 user 2m24.926s 00:25:52.828 sys 0m27.691s 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:52.828 ************************************ 00:25:52.828 END TEST nvmf_tls 00:25:52.828 ************************************ 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:52.828 ************************************ 00:25:52.828 START TEST nvmf_fips 00:25:52.828 ************************************ 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:52.828 * Looking for test storage... 
00:25:52.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:52.828 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:52.829 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:53.090 Error setting digest 00:25:53.090 0052312F9E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:53.090 0052312F9E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.090 19:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:59.682 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
00:25:59.682 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:59.682 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:59.682 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.682 
19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.682 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.683 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.683 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.683 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.683 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.683 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.683 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:25:59.944 00:25:59.944 --- 10.0.0.2 ping statistics --- 00:25:59.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.944 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:25:59.944 00:25:59.944 --- 10.0.0.1 ping statistics --- 00:25:59.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.944 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2978352 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2978352 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2978352 ']' 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:59.944 19:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:59.944 [2024-07-22 19:30:18.826330] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:59.944 [2024-07-22 19:30:18.826474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.206 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.206 [2024-07-22 19:30:18.977051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.467 [2024-07-22 19:30:19.205474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.467 [2024-07-22 19:30:19.205539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.467 [2024-07-22 19:30:19.205554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.467 [2024-07-22 19:30:19.205565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.467 [2024-07-22 19:30:19.205576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.467 [2024-07-22 19:30:19.205620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:00.728 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:00.989 [2024-07-22 19:30:19.735599] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.989 [2024-07-22 19:30:19.751585] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:00.989 [2024-07-22 19:30:19.751905] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.989 
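fips.sh exercises the same TLS path but builds its PSK inline: the NVMe TLS PSK interchange string is written byte-for-byte (echo -n, so no trailing newline) into test/nvmf/fips/key.txt, restricted to mode 0600, and that path is what setup_nvmf_tgt_conf and the later bdev_nvme_attach_controller call reference. The file preparation boils down to:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
printf '%s' "$key" > "$key_path"   # equivalent to the script's echo -n: no trailing newline
chmod 0600 "$key_path"             # PSK files are expected to be readable by the owner only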
[2024-07-22 19:30:19.810387] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:00.989 malloc0 00:26:00.989 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:00.989 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2978701 00:26:00.989 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2978701 /var/tmp/bdevperf.sock 00:26:00.990 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:00.990 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2978701 ']' 00:26:00.990 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.990 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.990 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.990 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.990 19:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:01.251 [2024-07-22 19:30:19.959424] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:01.251 [2024-07-22 19:30:19.959564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978701 ] 00:26:01.251 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.251 [2024-07-22 19:30:20.073301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.511 [2024-07-22 19:30:20.213160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.772 19:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.772 19:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:01.772 19:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:02.033 [2024-07-22 19:30:20.788060] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:02.033 [2024-07-22 19:30:20.788159] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:02.033 TLSTESTn1 00:26:02.033 19:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:02.033 Running I/O for 10 seconds... 
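Pulled out of the xtrace noise, the initiator side is just two commands against the bdevperf RPC socket, both taken verbatim from the trace above: attach an NVMe-oF controller over TCP with the PSK file, which surfaces the remote namespace as the bdev TLSTESTn1, then drive the verify workload through bdevperf.py:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests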
00:26:14.269 00:26:14.269 Latency(us) 00:26:14.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.269 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:14.269 Verification LBA range: start 0x0 length 0x2000 00:26:14.269 TLSTESTn1 : 10.05 4258.73 16.64 0.00 0.00 29962.57 6990.51 64225.28 00:26:14.269 =================================================================================================================== 00:26:14.269 Total : 4258.73 16.64 0.00 0.00 29962.57 6990.51 64225.28 00:26:14.269 0 00:26:14.269 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:14.269 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:14.269 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:14.270 nvmf_trace.0 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2978701 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2978701 ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2978701 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2978701 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2978701' 00:26:14.270 killing process with pid 2978701 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2978701 00:26:14.270 Received shutdown signal, test time was about 10.000000 seconds 00:26:14.270 00:26:14.270 Latency(us) 00:26:14.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.270 =================================================================================================================== 00:26:14.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.270 
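A quick sanity check on the MiB/s column above (a sketch, not log output): throughput is simply IOPS multiplied by the 4096-byte I/O size.

  awk 'BEGIN { printf "%.2f MiB/s\n", 4258.73 * 4096 / (1024 * 1024) }'   # prints 16.64, matching the table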
[2024-07-22 19:30:31.228894] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2978701 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.270 rmmod nvme_tcp 00:26:14.270 rmmod nvme_fabrics 00:26:14.270 rmmod nvme_keyring 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2978352 ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2978352 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2978352 ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2978352 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2978352 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2978352' 00:26:14.270 killing process with pid 2978352 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2978352 00:26:14.270 [2024-07-22 19:30:31.866103] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:14.270 19:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2978352 00:26:14.270 19:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:14.270 19:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:14.270 19:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:14.270 19:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:14.270 19:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:14.270 19:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.270 19:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.270 19:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:16.196 00:26:16.196 real 0m23.040s 00:26:16.196 user 0m24.810s 00:26:16.196 sys 0m9.275s 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:16.196 ************************************ 00:26:16.196 END TEST nvmf_fips 00:26:16.196 ************************************ 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:16.196 ************************************ 00:26:16.196 START TEST nvmf_fuzz 00:26:16.196 ************************************ 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:16.196 * Looking for test storage... 
00:26:16.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.196 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:26:16.197 19:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.869 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:22.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:22.870 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.870 19:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:22.870 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:22.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.870 
19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.870 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.132 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.132 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.132 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:23.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:26:23.132 00:26:23.132 --- 10.0.0.2 ping statistics --- 00:26:23.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.132 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:26:23.132 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:26:23.132 00:26:23.132 --- 10.0.0.1 ping statistics --- 00:26:23.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.133 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2985060 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2985060 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2985060 
']' 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:23.133 19:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.076 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.077 Malloc0 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:24.077 19:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:56.198 Fuzzing completed. Shutting down the fuzz application 00:26:56.198 00:26:56.198 Dumping successful admin opcodes: 00:26:56.198 8, 9, 10, 24, 00:26:56.198 Dumping successful io opcodes: 00:26:56.198 0, 9, 00:26:56.198 NS: 0x200003aefec0 I/O qp, Total commands completed: 801284, total successful commands: 4663, random_seed: 3299408704 00:26:56.198 NS: 0x200003aefec0 admin qp, Total commands completed: 99933, total successful commands: 817, random_seed: 54311296 00:26:56.198 19:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:56.198 Fuzzing completed. Shutting down the fuzz application 00:26:56.198 00:26:56.198 Dumping successful admin opcodes: 00:26:56.198 24, 00:26:56.198 Dumping successful io opcodes: 00:26:56.198 00:26:56.198 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2133553695 00:26:56.198 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2133654619 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.198 rmmod nvme_tcp 00:26:56.198 rmmod nvme_fabrics 00:26:56.198 rmmod nvme_keyring 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2985060 ']' 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
2985060 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2985060 ']' 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 2985060 00:26:56.198 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:26:56.460 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:56.460 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2985060 00:26:56.460 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:56.460 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:56.460 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2985060' 00:26:56.460 killing process with pid 2985060 00:26:56.460 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 2985060 00:26:56.460 19:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 2985060 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.403 19:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.319 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.319 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:59.580 00:26:59.580 real 0m43.565s 00:26:59.580 user 0m58.616s 00:26:59.580 sys 0m14.669s 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:59.580 ************************************ 00:26:59.580 END TEST nvmf_fuzz 00:26:59.580 ************************************ 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.580 ************************************ 00:26:59.580 START TEST nvmf_multiconnection 00:26:59.580 ************************************ 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:59.580 * Looking for test storage... 00:26:59.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:59.580 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.581 19:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.724 19:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:07.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:07.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:07.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:07.724 Found net devices 
under 0000:4b:00.1: cvl_0_1 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.724 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:07.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:27:07.725 00:27:07.725 --- 10.0.0.2 ping statistics --- 00:27:07.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.725 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:27:07.725 00:27:07.725 --- 10.0.0.1 ping statistics --- 00:27:07.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.725 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2995724 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2995724 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 2995724 ']' 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
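The trace above shows nvmf_tcp_init building the loop-back test bed: one e810 port (cvl_0_0) is moved into a private network namespace for the target side, the peer port (cvl_0_1) stays in the root namespace for the initiator, both get addresses on 10.0.0.0/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, with the interface names taken from this particular host (they will differ on other machines):

  # target-side port goes into its own namespace; initiator port stays in the root ns
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1 = initiator (root ns), 10.0.0.2 = target (inside the namespace)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open TCP port 4420 on the initiator-side interface (as in the trace), then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then started under the same namespace prefix, as the trace shows: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.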
00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.725 19:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 [2024-07-22 19:31:25.649661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:07.725 [2024-07-22 19:31:25.649785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.725 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.725 [2024-07-22 19:31:25.799498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.725 [2024-07-22 19:31:25.986034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.725 [2024-07-22 19:31:25.986077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.725 [2024-07-22 19:31:25.986090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.725 [2024-07-22 19:31:25.986100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.725 [2024-07-22 19:31:25.986110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.725 [2024-07-22 19:31:25.986275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.725 [2024-07-22 19:31:25.986335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.725 [2024-07-22 19:31:25.986476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.725 [2024-07-22 19:31:25.986501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 [2024-07-22 19:31:26.434802] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
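With the TCP transport created (nvmf_create_transport -t tcp -o -u 8192), the loop whose iterations follow provisions eleven identical subsystems: each pass creates a 64 MiB malloc bdev with 512-byte blocks, creates subsystem cnode<i> with serial SPDK<i> and any-host access, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. rpc_cmd in the trace is the test harness's RPC wrapper; issued directly against the target's default /var/tmp/spdk.sock the same sequence would look roughly like this (a sketch of the traced calls, not the script itself):

  # one-time: create the TCP transport with the same options as in the trace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  for i in $(seq 1 11); do
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i                            # 64 MiB bdev, 512 B blocks
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # allow any host, serial SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i       # attach the bdev as a namespace
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

On the host side the remainder of the trace then runs nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode<i> (with the host NQN/ID shown above) for each subsystem, waits until the SPDK<i> serial appears in lsblk, and finally drives all eleven namespaces with a 10-second 256 KiB sequential-read fio job at queue depth 64.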
00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 Malloc1 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.725 [2024-07-22 19:31:26.539648] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.725 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.726 Malloc2 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.726 19:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.726 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 Malloc3 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.987 19:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 Malloc4 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 Malloc5 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:07.987 19:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.987 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.248 Malloc6 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.248 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:08.249 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 Malloc7 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 Malloc8 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 
19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.249 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 Malloc9 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 Malloc10 00:27:08.511 19:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 Malloc11 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.511 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.772 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.772 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:08.772 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.772 19:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:10.158 19:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:10.158 19:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:10.158 19:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:10.158 19:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:10.158 19:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:12.096 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:12.358 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:12.358 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:27:12.358 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:12.358 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:12.358 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:12.358 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.358 19:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:13.744 19:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:13.744 19:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:13.744 19:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:13.744 19:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:13.744 19:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:15.657 19:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:15.657 19:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:15.657 19:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:27:15.657 19:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:15.657 19:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:15.657 19:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:15.657 19:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.657 19:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:17.638 19:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:17.638 19:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:17.638 19:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:17.638 19:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:17.638 19:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.555 19:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:21.468 19:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:21.468 19:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:21.468 19:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:21.468 19:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:27:21.468 19:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.382 19:31:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:25.295 19:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:25.295 19:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:25.295 19:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:25.295 19:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:25.295 19:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.210 19:31:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:28.596 19:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:28.596 19:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:28.596 19:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:27:28.596 19:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:28.596 19:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.142 19:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:32.527 19:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:32.527 19:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:32.527 19:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:32.527 19:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:32.527 19:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.438 19:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:36.349 19:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:36.349 19:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:27:36.349 19:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:36.349 19:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:36.349 19:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.260 19:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:40.171 19:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:40.171 19:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:40.171 19:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:40.171 19:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:40.171 19:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:42.083 19:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:43.995 19:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:43.995 19:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:43.995 19:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:43.995 19:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:43.995 19:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:45.999 19:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:47.911 19:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:47.912 19:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:47.912 19:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:47.912 19:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:47.912 19:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:49.838 19:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:49.838 19:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:49.838 19:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:49.838 19:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:49.838 19:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:49.838 19:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:49.838 19:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:49.838 [global] 00:27:49.838 thread=1 00:27:49.838 invalidate=1 00:27:49.838 rw=read 
00:27:49.838 time_based=1 00:27:49.838 runtime=10 00:27:49.838 ioengine=libaio 00:27:49.838 direct=1 00:27:49.838 bs=262144 00:27:49.838 iodepth=64 00:27:49.838 norandommap=1 00:27:49.838 numjobs=1 00:27:49.838 00:27:49.838 [job0] 00:27:49.838 filename=/dev/nvme0n1 00:27:49.838 [job1] 00:27:49.838 filename=/dev/nvme10n1 00:27:49.838 [job2] 00:27:49.838 filename=/dev/nvme1n1 00:27:49.838 [job3] 00:27:49.838 filename=/dev/nvme2n1 00:27:49.838 [job4] 00:27:49.838 filename=/dev/nvme3n1 00:27:49.838 [job5] 00:27:49.838 filename=/dev/nvme4n1 00:27:49.838 [job6] 00:27:49.838 filename=/dev/nvme5n1 00:27:49.838 [job7] 00:27:49.838 filename=/dev/nvme6n1 00:27:49.838 [job8] 00:27:49.838 filename=/dev/nvme7n1 00:27:49.838 [job9] 00:27:49.838 filename=/dev/nvme8n1 00:27:49.838 [job10] 00:27:49.838 filename=/dev/nvme9n1 00:27:50.099 Could not set queue depth (nvme0n1) 00:27:50.099 Could not set queue depth (nvme10n1) 00:27:50.099 Could not set queue depth (nvme1n1) 00:27:50.099 Could not set queue depth (nvme2n1) 00:27:50.099 Could not set queue depth (nvme3n1) 00:27:50.099 Could not set queue depth (nvme4n1) 00:27:50.099 Could not set queue depth (nvme5n1) 00:27:50.099 Could not set queue depth (nvme6n1) 00:27:50.099 Could not set queue depth (nvme7n1) 00:27:50.099 Could not set queue depth (nvme8n1) 00:27:50.099 Could not set queue depth (nvme9n1) 00:27:50.360 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:50.360 fio-3.35 00:27:50.360 Starting 11 threads 00:28:02.594 00:28:02.594 job0: (groupid=0, jobs=1): err= 0: pid=3004479: Mon Jul 22 19:32:19 2024 00:28:02.594 read: IOPS=859, BW=215MiB/s (225MB/s)(2174MiB/10117msec) 00:28:02.594 slat (usec): min=6, max=80783, avg=833.41, stdev=3168.96 00:28:02.594 clat (usec): min=1984, max=246757, avg=73522.74, stdev=33865.31 00:28:02.594 lat (msec): min=2, max=246, avg=74.36, stdev=34.30 00:28:02.594 clat percentiles (msec): 00:28:02.594 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 29], 20.00th=[ 42], 00:28:02.594 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 80], 00:28:02.594 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 117], 95.00th=[ 134], 00:28:02.594 | 99.00th=[ 159], 99.50th=[ 182], 99.90th=[ 241], 99.95th=[ 247], 00:28:02.594 | 99.99th=[ 247] 00:28:02.594 bw ( KiB/s): 
min=104448, max=392704, per=9.67%, avg=220953.60, stdev=66242.61, samples=20 00:28:02.594 iops : min= 408, max= 1534, avg=863.10, stdev=258.76, samples=20 00:28:02.594 lat (msec) : 2=0.01%, 4=0.49%, 10=1.05%, 20=4.45%, 50=16.68% 00:28:02.594 lat (msec) : 100=58.49%, 250=18.83% 00:28:02.594 cpu : usr=0.34%, sys=2.70%, ctx=2164, majf=0, minf=3534 00:28:02.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:02.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.594 issued rwts: total=8695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.594 job1: (groupid=0, jobs=1): err= 0: pid=3004482: Mon Jul 22 19:32:19 2024 00:28:02.594 read: IOPS=799, BW=200MiB/s (210MB/s)(2021MiB/10116msec) 00:28:02.594 slat (usec): min=5, max=103345, avg=1020.22, stdev=3261.24 00:28:02.594 clat (usec): min=1851, max=252847, avg=78931.68, stdev=31189.08 00:28:02.594 lat (usec): min=1916, max=252888, avg=79951.91, stdev=31591.69 00:28:02.594 clat percentiles (msec): 00:28:02.594 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 34], 20.00th=[ 59], 00:28:02.594 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 85], 00:28:02.594 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 113], 95.00th=[ 136], 00:28:02.594 | 99.00th=[ 159], 99.50th=[ 178], 99.90th=[ 249], 99.95th=[ 249], 00:28:02.594 | 99.99th=[ 253] 00:28:02.594 bw ( KiB/s): min=104960, max=390412, per=8.98%, avg=205376.60, stdev=60681.00, samples=20 00:28:02.594 iops : min= 410, max= 1525, avg=802.25, stdev=237.03, samples=20 00:28:02.594 lat (msec) : 2=0.05%, 4=0.61%, 10=1.11%, 20=1.09%, 50=13.28% 00:28:02.594 lat (msec) : 100=64.72%, 250=19.12%, 500=0.01% 00:28:02.594 cpu : usr=0.34%, sys=2.54%, ctx=1973, majf=0, minf=4097 00:28:02.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:02.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.594 issued rwts: total=8085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.594 job2: (groupid=0, jobs=1): err= 0: pid=3004483: Mon Jul 22 19:32:19 2024 00:28:02.594 read: IOPS=765, BW=191MiB/s (201MB/s)(1931MiB/10086msec) 00:28:02.594 slat (usec): min=7, max=33761, avg=1052.99, stdev=2836.06 00:28:02.594 clat (msec): min=10, max=180, avg=82.38, stdev=22.08 00:28:02.594 lat (msec): min=10, max=180, avg=83.43, stdev=22.32 00:28:02.594 clat percentiles (msec): 00:28:02.594 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:28:02.594 | 30.00th=[ 69], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 89], 00:28:02.594 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 120], 00:28:02.594 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:28:02.594 | 99.99th=[ 182] 00:28:02.594 bw ( KiB/s): min=138240, max=281600, per=8.58%, avg=196096.00, stdev=45747.05, samples=20 00:28:02.594 iops : min= 540, max= 1100, avg=766.00, stdev=178.70, samples=20 00:28:02.594 lat (msec) : 20=0.28%, 50=7.43%, 100=71.38%, 250=20.90% 00:28:02.594 cpu : usr=0.33%, sys=2.63%, ctx=1902, majf=0, minf=4097 00:28:02.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:02.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.594 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.594 issued rwts: total=7723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.594 job3: (groupid=0, jobs=1): err= 0: pid=3004484: Mon Jul 22 19:32:19 2024 00:28:02.594 read: IOPS=785, BW=196MiB/s (206MB/s)(1977MiB/10072msec) 00:28:02.594 slat (usec): min=8, max=46967, avg=978.39, stdev=2885.39 00:28:02.594 clat (msec): min=2, max=157, avg=80.42, stdev=21.89 00:28:02.594 lat (msec): min=2, max=157, avg=81.40, stdev=22.21 00:28:02.594 clat percentiles (msec): 00:28:02.594 | 1.00th=[ 15], 5.00th=[ 38], 10.00th=[ 58], 20.00th=[ 66], 00:28:02.594 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 86], 00:28:02.594 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 114], 00:28:02.594 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 155], 99.95th=[ 155], 00:28:02.594 | 99.99th=[ 159] 00:28:02.594 bw ( KiB/s): min=153600, max=239616, per=8.79%, avg=200878.30, stdev=29099.43, samples=20 00:28:02.594 iops : min= 600, max= 936, avg=784.65, stdev=113.66, samples=20 00:28:02.594 lat (msec) : 4=0.11%, 10=0.40%, 20=1.09%, 50=5.17%, 100=76.43% 00:28:02.594 lat (msec) : 250=16.79% 00:28:02.594 cpu : usr=0.34%, sys=2.81%, ctx=1987, majf=0, minf=4097 00:28:02.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:02.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.594 issued rwts: total=7909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.594 job4: (groupid=0, jobs=1): err= 0: pid=3004485: Mon Jul 22 19:32:19 2024 00:28:02.594 read: IOPS=819, BW=205MiB/s (215MB/s)(2071MiB/10111msec) 00:28:02.594 slat (usec): min=6, max=69193, avg=1064.55, stdev=3324.85 00:28:02.594 clat (msec): min=5, max=254, avg=76.95, stdev=28.11 00:28:02.594 lat (msec): min=7, max=254, avg=78.01, stdev=28.57 00:28:02.594 clat percentiles (msec): 00:28:02.594 | 1.00th=[ 21], 5.00th=[ 42], 10.00th=[ 49], 20.00th=[ 55], 00:28:02.595 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 73], 60.00th=[ 81], 00:28:02.595 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 114], 95.00th=[ 131], 00:28:02.595 | 99.00th=[ 157], 99.50th=[ 176], 99.90th=[ 236], 99.95th=[ 249], 00:28:02.595 | 99.99th=[ 255] 00:28:02.595 bw ( KiB/s): min=104448, max=301056, per=9.21%, avg=210457.60, stdev=56336.04, samples=20 00:28:02.595 iops : min= 408, max= 1176, avg=822.10, stdev=220.06, samples=20 00:28:02.595 lat (msec) : 10=0.10%, 20=0.75%, 50=11.20%, 100=71.92%, 250=16.02% 00:28:02.595 lat (msec) : 500=0.01% 00:28:02.595 cpu : usr=0.28%, sys=2.49%, ctx=1879, majf=0, minf=4097 00:28:02.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.595 issued rwts: total=8284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.595 job5: (groupid=0, jobs=1): err= 0: pid=3004486: Mon Jul 22 19:32:19 2024 00:28:02.595 read: IOPS=962, BW=241MiB/s (252MB/s)(2410MiB/10013msec) 00:28:02.595 slat (usec): min=6, max=35161, avg=913.60, stdev=2465.99 00:28:02.595 clat (msec): min=11, max=136, avg=65.46, stdev=25.54 00:28:02.595 lat (msec): min=13, max=143, avg=66.38, stdev=25.81 00:28:02.595 clat 
percentiles (msec): 00:28:02.595 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:28:02.595 | 30.00th=[ 50], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 75], 00:28:02.595 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 96], 95.00th=[ 103], 00:28:02.595 | 99.00th=[ 115], 99.50th=[ 129], 99.90th=[ 134], 99.95th=[ 136], 00:28:02.595 | 99.99th=[ 136] 00:28:02.595 bw ( KiB/s): min=156160, max=526848, per=10.73%, avg=245168.40, stdev=109868.65, samples=20 00:28:02.595 iops : min= 610, max= 2058, avg=957.65, stdev=429.18, samples=20 00:28:02.595 lat (msec) : 20=0.35%, 50=29.84%, 100=62.98%, 250=6.83% 00:28:02.595 cpu : usr=0.43%, sys=3.05%, ctx=2122, majf=0, minf=4097 00:28:02.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.595 issued rwts: total=9639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.595 job6: (groupid=0, jobs=1): err= 0: pid=3004487: Mon Jul 22 19:32:19 2024 00:28:02.595 read: IOPS=709, BW=177MiB/s (186MB/s)(1787MiB/10081msec) 00:28:02.595 slat (usec): min=8, max=35412, avg=1324.91, stdev=3250.63 00:28:02.595 clat (msec): min=16, max=182, avg=88.84, stdev=18.48 00:28:02.595 lat (msec): min=16, max=182, avg=90.16, stdev=18.83 00:28:02.595 clat percentiles (msec): 00:28:02.595 | 1.00th=[ 40], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 74], 00:28:02.595 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 94], 00:28:02.595 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 111], 95.00th=[ 117], 00:28:02.595 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 161], 99.95th=[ 171], 00:28:02.595 | 99.99th=[ 182] 00:28:02.595 bw ( KiB/s): min=142848, max=214528, per=7.94%, avg=181401.60, stdev=23962.07, samples=20 00:28:02.595 iops : min= 558, max= 838, avg=708.60, stdev=93.60, samples=20 00:28:02.595 lat (msec) : 20=0.04%, 50=2.07%, 100=71.65%, 250=26.24% 00:28:02.595 cpu : usr=0.29%, sys=2.70%, ctx=1675, majf=0, minf=4097 00:28:02.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:28:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.595 issued rwts: total=7149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.595 job7: (groupid=0, jobs=1): err= 0: pid=3004488: Mon Jul 22 19:32:19 2024 00:28:02.595 read: IOPS=723, BW=181MiB/s (190MB/s)(1826MiB/10100msec) 00:28:02.595 slat (usec): min=8, max=52709, avg=1335.98, stdev=3483.91 00:28:02.595 clat (msec): min=3, max=242, avg=87.06, stdev=27.46 00:28:02.595 lat (msec): min=3, max=242, avg=88.40, stdev=27.92 00:28:02.595 clat percentiles (msec): 00:28:02.595 | 1.00th=[ 44], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 61], 00:28:02.595 | 30.00th=[ 67], 40.00th=[ 80], 50.00th=[ 89], 60.00th=[ 94], 00:28:02.595 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 136], 00:28:02.595 | 99.00th=[ 155], 99.50th=[ 176], 99.90th=[ 228], 99.95th=[ 230], 00:28:02.595 | 99.99th=[ 243] 00:28:02.595 bw ( KiB/s): min=110592, max=274944, per=8.11%, avg=185381.95, stdev=53676.70, samples=20 00:28:02.595 iops : min= 432, max= 1074, avg=724.10, stdev=209.73, samples=20 00:28:02.595 lat (msec) : 4=0.01%, 10=0.01%, 20=0.36%, 50=3.75%, 100=67.89% 00:28:02.595 lat (msec) : 250=27.98% 
00:28:02.595 cpu : usr=0.46%, sys=2.59%, ctx=1668, majf=0, minf=4097 00:28:02.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:28:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.595 issued rwts: total=7305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.595 job8: (groupid=0, jobs=1): err= 0: pid=3004489: Mon Jul 22 19:32:19 2024 00:28:02.595 read: IOPS=1173, BW=293MiB/s (308MB/s)(2939MiB/10014msec) 00:28:02.595 slat (usec): min=8, max=91343, avg=829.80, stdev=2278.01 00:28:02.595 clat (msec): min=2, max=203, avg=53.65, stdev=24.23 00:28:02.595 lat (msec): min=2, max=211, avg=54.48, stdev=24.56 00:28:02.595 clat percentiles (msec): 00:28:02.595 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 33], 00:28:02.595 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 48], 60.00th=[ 58], 00:28:02.595 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 89], 95.00th=[ 96], 00:28:02.595 | 99.00th=[ 110], 99.50th=[ 125], 99.90th=[ 190], 99.95th=[ 203], 00:28:02.595 | 99.99th=[ 203] 00:28:02.595 bw ( KiB/s): min=155648, max=521216, per=13.10%, avg=299340.80, stdev=106295.43, samples=20 00:28:02.595 iops : min= 608, max= 2036, avg=1169.30, stdev=415.22, samples=20 00:28:02.595 lat (msec) : 4=0.48%, 10=0.19%, 20=0.44%, 50=50.80%, 100=45.15% 00:28:02.595 lat (msec) : 250=2.93% 00:28:02.595 cpu : usr=0.52%, sys=3.75%, ctx=2386, majf=0, minf=4097 00:28:02.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.595 issued rwts: total=11756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.595 job9: (groupid=0, jobs=1): err= 0: pid=3004490: Mon Jul 22 19:32:19 2024 00:28:02.595 read: IOPS=684, BW=171MiB/s (180MB/s)(1726MiB/10084msec) 00:28:02.595 slat (usec): min=7, max=39762, avg=1446.81, stdev=3502.30 00:28:02.595 clat (msec): min=13, max=192, avg=91.89, stdev=19.59 00:28:02.595 lat (msec): min=15, max=192, avg=93.33, stdev=19.81 00:28:02.595 clat percentiles (msec): 00:28:02.595 | 1.00th=[ 48], 5.00th=[ 54], 10.00th=[ 67], 20.00th=[ 79], 00:28:02.595 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 97], 00:28:02.595 | 70.00th=[ 102], 80.00th=[ 107], 90.00th=[ 114], 95.00th=[ 122], 00:28:02.595 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 182], 99.95th=[ 188], 00:28:02.595 | 99.99th=[ 192] 00:28:02.595 bw ( KiB/s): min=138752, max=249856, per=7.66%, avg=175111.80, stdev=28669.39, samples=20 00:28:02.595 iops : min= 542, max= 976, avg=684.00, stdev=111.99, samples=20 00:28:02.595 lat (msec) : 20=0.14%, 50=1.69%, 100=65.72%, 250=32.44% 00:28:02.595 cpu : usr=0.34%, sys=2.47%, ctx=1429, majf=0, minf=4097 00:28:02.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.595 issued rwts: total=6905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.595 job10: (groupid=0, jobs=1): err= 0: pid=3004491: Mon Jul 22 19:32:19 2024 00:28:02.595 read: IOPS=680, BW=170MiB/s 
(178MB/s)(1721MiB/10113msec) 00:28:02.595 slat (usec): min=9, max=58640, avg=1453.34, stdev=3633.46 00:28:02.595 clat (msec): min=44, max=269, avg=92.47, stdev=24.98 00:28:02.595 lat (msec): min=44, max=269, avg=93.92, stdev=25.37 00:28:02.595 clat percentiles (msec): 00:28:02.595 | 1.00th=[ 52], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 70], 00:28:02.595 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 97], 00:28:02.595 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 128], 95.00th=[ 138], 00:28:02.595 | 99.00th=[ 153], 99.50th=[ 178], 99.90th=[ 224], 99.95th=[ 234], 00:28:02.595 | 99.99th=[ 271] 00:28:02.595 bw ( KiB/s): min=114688, max=240128, per=7.64%, avg=174566.40, stdev=41837.99, samples=20 00:28:02.595 iops : min= 448, max= 938, avg=681.90, stdev=163.43, samples=20 00:28:02.595 lat (msec) : 50=0.71%, 100=65.70%, 250=33.56%, 500=0.03% 00:28:02.595 cpu : usr=0.25%, sys=2.62%, ctx=1550, majf=0, minf=4097 00:28:02.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:02.595 issued rwts: total=6883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:02.595 00:28:02.595 Run status group 0 (all jobs): 00:28:02.595 READ: bw=2232MiB/s (2341MB/s), 170MiB/s-293MiB/s (178MB/s-308MB/s), io=22.1GiB (23.7GB), run=10013-10117msec 00:28:02.595 00:28:02.595 Disk stats (read/write): 00:28:02.596 nvme0n1: ios=17344/0, merge=0/0, ticks=1254138/0, in_queue=1254138, util=96.56% 00:28:02.596 nvme10n1: ios=16134/0, merge=0/0, ticks=1251282/0, in_queue=1251282, util=96.83% 00:28:02.596 nvme1n1: ios=15146/0, merge=0/0, ticks=1221732/0, in_queue=1221732, util=97.10% 00:28:02.596 nvme2n1: ios=15509/0, merge=0/0, ticks=1224295/0, in_queue=1224295, util=97.29% 00:28:02.596 nvme3n1: ios=16535/0, merge=0/0, ticks=1250098/0, in_queue=1250098, util=97.43% 00:28:02.596 nvme4n1: ios=18569/0, merge=0/0, ticks=1225130/0, in_queue=1225130, util=97.87% 00:28:02.596 nvme5n1: ios=13990/0, merge=0/0, ticks=1217006/0, in_queue=1217006, util=98.01% 00:28:02.596 nvme6n1: ios=14351/0, merge=0/0, ticks=1211565/0, in_queue=1211565, util=98.18% 00:28:02.596 nvme7n1: ios=22742/0, merge=0/0, ticks=1224857/0, in_queue=1224857, util=98.71% 00:28:02.596 nvme8n1: ios=13517/0, merge=0/0, ticks=1216706/0, in_queue=1216706, util=99.03% 00:28:02.596 nvme9n1: ios=13707/0, merge=0/0, ticks=1241801/0, in_queue=1241801, util=99.19% 00:28:02.596 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:02.596 [global] 00:28:02.596 thread=1 00:28:02.596 invalidate=1 00:28:02.596 rw=randwrite 00:28:02.596 time_based=1 00:28:02.596 runtime=10 00:28:02.596 ioengine=libaio 00:28:02.596 direct=1 00:28:02.596 bs=262144 00:28:02.596 iodepth=64 00:28:02.596 norandommap=1 00:28:02.596 numjobs=1 00:28:02.596 00:28:02.596 [job0] 00:28:02.596 filename=/dev/nvme0n1 00:28:02.596 [job1] 00:28:02.596 filename=/dev/nvme10n1 00:28:02.596 [job2] 00:28:02.596 filename=/dev/nvme1n1 00:28:02.596 [job3] 00:28:02.596 filename=/dev/nvme2n1 00:28:02.596 [job4] 00:28:02.596 filename=/dev/nvme3n1 00:28:02.596 [job5] 00:28:02.596 filename=/dev/nvme4n1 00:28:02.596 [job6] 00:28:02.596 filename=/dev/nvme5n1 00:28:02.596 [job7] 00:28:02.596 filename=/dev/nvme6n1 
00:28:02.596 [job8] 00:28:02.596 filename=/dev/nvme7n1 00:28:02.596 [job9] 00:28:02.596 filename=/dev/nvme8n1 00:28:02.596 [job10] 00:28:02.596 filename=/dev/nvme9n1 00:28:02.596 Could not set queue depth (nvme0n1) 00:28:02.596 Could not set queue depth (nvme10n1) 00:28:02.596 Could not set queue depth (nvme1n1) 00:28:02.596 Could not set queue depth (nvme2n1) 00:28:02.596 Could not set queue depth (nvme3n1) 00:28:02.596 Could not set queue depth (nvme4n1) 00:28:02.596 Could not set queue depth (nvme5n1) 00:28:02.596 Could not set queue depth (nvme6n1) 00:28:02.596 Could not set queue depth (nvme7n1) 00:28:02.596 Could not set queue depth (nvme8n1) 00:28:02.596 Could not set queue depth (nvme9n1) 00:28:02.596 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:02.596 fio-3.35 00:28:02.596 Starting 11 threads 00:28:12.609 00:28:12.609 job0: (groupid=0, jobs=1): err= 0: pid=3006485: Mon Jul 22 19:32:30 2024 00:28:12.609 write: IOPS=765, BW=191MiB/s (201MB/s)(1935MiB/10104msec); 0 zone resets 00:28:12.609 slat (usec): min=20, max=10480, avg=1287.52, stdev=2202.23 00:28:12.609 clat (msec): min=12, max=203, avg=82.24, stdev=12.88 00:28:12.609 lat (msec): min=12, max=203, avg=83.53, stdev=12.94 00:28:12.609 clat percentiles (msec): 00:28:12.609 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 75], 00:28:12.609 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 81], 00:28:12.609 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 108], 00:28:12.609 | 99.00th=[ 115], 99.50th=[ 138], 99.90th=[ 190], 99.95th=[ 199], 00:28:12.609 | 99.99th=[ 205] 00:28:12.609 bw ( KiB/s): min=150528, max=216064, per=11.18%, avg=196505.60, stdev=22018.57, samples=20 00:28:12.609 iops : min= 588, max= 844, avg=767.60, stdev=86.01, samples=20 00:28:12.609 lat (msec) : 20=0.10%, 50=0.31%, 100=85.54%, 250=14.05% 00:28:12.609 cpu : usr=1.68%, sys=2.89%, ctx=1967, majf=0, minf=1 00:28:12.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.609 issued rwts: total=0,7739,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:12.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.609 job1: (groupid=0, jobs=1): err= 0: pid=3006514: Mon Jul 22 19:32:30 2024 00:28:12.609 write: IOPS=1016, BW=254MiB/s (267MB/s)(2565MiB/10092msec); 0 zone resets 00:28:12.609 slat (usec): min=19, max=9822, avg=926.52, stdev=1700.83 00:28:12.609 clat (msec): min=12, max=179, avg=62.00, stdev=19.68 00:28:12.609 lat (msec): min=12, max=179, avg=62.92, stdev=19.90 00:28:12.609 clat percentiles (msec): 00:28:12.609 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 46], 20.00th=[ 47], 00:28:12.609 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 68], 00:28:12.609 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 95], 95.00th=[ 100], 00:28:12.609 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 163], 99.95th=[ 176], 00:28:12.609 | 99.99th=[ 180] 00:28:12.609 bw ( KiB/s): min=164864, max=350208, per=14.85%, avg=261090.35, stdev=65504.29, samples=20 00:28:12.609 iops : min= 644, max= 1368, avg=1019.85, stdev=255.90, samples=20 00:28:12.609 lat (msec) : 20=0.09%, 50=51.71%, 100=43.97%, 250=4.23% 00:28:12.609 cpu : usr=2.37%, sys=3.35%, ctx=2862, majf=0, minf=1 00:28:12.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.609 issued rwts: total=0,10261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.609 job2: (groupid=0, jobs=1): err= 0: pid=3006537: Mon Jul 22 19:32:30 2024 00:28:12.609 write: IOPS=765, BW=191MiB/s (201MB/s)(1933MiB/10099msec); 0 zone resets 00:28:12.609 slat (usec): min=25, max=10237, avg=1288.73, stdev=2210.96 00:28:12.609 clat (msec): min=12, max=204, avg=82.29, stdev=12.87 00:28:12.609 lat (msec): min=12, max=204, avg=83.58, stdev=12.92 00:28:12.609 clat percentiles (msec): 00:28:12.609 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 75], 00:28:12.609 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 80], 00:28:12.609 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 108], 00:28:12.609 | 99.00th=[ 115], 99.50th=[ 140], 99.90th=[ 192], 99.95th=[ 199], 00:28:12.609 | 99.99th=[ 205] 00:28:12.609 bw ( KiB/s): min=150016, max=214528, per=11.17%, avg=196300.80, stdev=21955.06, samples=20 00:28:12.609 iops : min= 586, max= 838, avg=766.80, stdev=85.76, samples=20 00:28:12.609 lat (msec) : 20=0.10%, 50=0.31%, 100=85.20%, 250=14.38% 00:28:12.609 cpu : usr=1.94%, sys=2.32%, ctx=1964, majf=0, minf=1 00:28:12.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.609 issued rwts: total=0,7731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.609 job3: (groupid=0, jobs=1): err= 0: pid=3006549: Mon Jul 22 19:32:30 2024 00:28:12.609 write: IOPS=427, BW=107MiB/s (112MB/s)(1078MiB/10084msec); 0 zone resets 00:28:12.609 slat (usec): min=19, max=93890, avg=2287.45, stdev=4356.94 00:28:12.609 clat (msec): min=17, max=232, avg=147.41, stdev=32.23 00:28:12.609 lat (msec): min=17, max=233, avg=149.70, stdev=32.53 00:28:12.609 clat percentiles (msec): 00:28:12.609 | 1.00th=[ 44], 5.00th=[ 88], 10.00th=[ 93], 20.00th=[ 125], 00:28:12.609 | 30.00th=[ 131], 
40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:28:12.609 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 174], 95.00th=[ 180], 00:28:12.609 | 99.00th=[ 197], 99.50th=[ 207], 99.90th=[ 215], 99.95th=[ 234], 00:28:12.609 | 99.99th=[ 234] 00:28:12.609 bw ( KiB/s): min=92160, max=176128, per=6.19%, avg=108732.40, stdev=24222.99, samples=20 00:28:12.609 iops : min= 360, max= 688, avg=424.70, stdev=94.65, samples=20 00:28:12.609 lat (msec) : 20=0.09%, 50=1.11%, 100=12.27%, 250=86.52% 00:28:12.609 cpu : usr=1.17%, sys=1.24%, ctx=1204, majf=0, minf=1 00:28:12.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:28:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.609 issued rwts: total=0,4310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.609 job4: (groupid=0, jobs=1): err= 0: pid=3006555: Mon Jul 22 19:32:30 2024 00:28:12.609 write: IOPS=409, BW=102MiB/s (107MB/s)(1032MiB/10087msec); 0 zone resets 00:28:12.609 slat (usec): min=21, max=88570, avg=2407.37, stdev=4638.24 00:28:12.609 clat (msec): min=24, max=235, avg=153.85, stdev=29.34 00:28:12.609 lat (msec): min=24, max=235, avg=156.25, stdev=29.52 00:28:12.609 clat percentiles (msec): 00:28:12.609 | 1.00th=[ 89], 5.00th=[ 95], 10.00th=[ 115], 20.00th=[ 127], 00:28:12.609 | 30.00th=[ 132], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 171], 00:28:12.609 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 186], 00:28:12.610 | 99.00th=[ 197], 99.50th=[ 203], 99.90th=[ 220], 99.95th=[ 236], 00:28:12.610 | 99.99th=[ 236] 00:28:12.610 bw ( KiB/s): min=77824, max=169472, per=5.92%, avg=104089.60, stdev=21661.32, samples=20 00:28:12.610 iops : min= 304, max= 662, avg=406.60, stdev=84.61, samples=20 00:28:12.610 lat (msec) : 50=0.19%, 100=8.60%, 250=91.21% 00:28:12.610 cpu : usr=1.12%, sys=1.22%, ctx=1101, majf=0, minf=1 00:28:12.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:12.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.610 issued rwts: total=0,4129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.610 job5: (groupid=0, jobs=1): err= 0: pid=3006579: Mon Jul 22 19:32:30 2024 00:28:12.610 write: IOPS=665, BW=166MiB/s (174MB/s)(1677MiB/10083msec); 0 zone resets 00:28:12.610 slat (usec): min=27, max=29657, avg=1401.14, stdev=2614.04 00:28:12.610 clat (msec): min=4, max=172, avg=94.77, stdev=24.00 00:28:12.610 lat (msec): min=5, max=172, avg=96.17, stdev=24.34 00:28:12.610 clat percentiles (msec): 00:28:12.610 | 1.00th=[ 19], 5.00th=[ 49], 10.00th=[ 69], 20.00th=[ 73], 00:28:12.610 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 108], 00:28:12.610 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 117], 95.00th=[ 120], 00:28:12.610 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 165], 99.95th=[ 167], 00:28:12.610 | 99.99th=[ 174] 00:28:12.610 bw ( KiB/s): min=124928, max=241152, per=9.68%, avg=170112.00, stdev=33108.49, samples=20 00:28:12.610 iops : min= 488, max= 942, avg=664.50, stdev=129.33, samples=20 00:28:12.610 lat (msec) : 10=0.31%, 20=0.86%, 50=3.98%, 100=49.91%, 250=44.93% 00:28:12.610 cpu : usr=1.59%, sys=1.90%, ctx=2181, majf=0, minf=1 00:28:12.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.5%, >=64=99.1% 00:28:12.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.610 issued rwts: total=0,6708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.610 job6: (groupid=0, jobs=1): err= 0: pid=3006590: Mon Jul 22 19:32:30 2024 00:28:12.610 write: IOPS=434, BW=109MiB/s (114MB/s)(1096MiB/10102msec); 0 zone resets 00:28:12.610 slat (usec): min=27, max=147851, avg=2201.72, stdev=4994.21 00:28:12.610 clat (msec): min=7, max=329, avg=145.06, stdev=40.47 00:28:12.610 lat (msec): min=7, max=329, avg=147.26, stdev=40.96 00:28:12.610 clat percentiles (msec): 00:28:12.610 | 1.00th=[ 39], 5.00th=[ 90], 10.00th=[ 101], 20.00th=[ 104], 00:28:12.610 | 30.00th=[ 110], 40.00th=[ 146], 50.00th=[ 163], 60.00th=[ 167], 00:28:12.610 | 70.00th=[ 171], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:28:12.610 | 99.00th=[ 222], 99.50th=[ 255], 99.90th=[ 321], 99.95th=[ 321], 00:28:12.610 | 99.99th=[ 330] 00:28:12.610 bw ( KiB/s): min=76288, max=173568, per=6.29%, avg=110643.20, stdev=29633.16, samples=20 00:28:12.610 iops : min= 298, max= 678, avg=432.20, stdev=115.75, samples=20 00:28:12.610 lat (msec) : 10=0.05%, 20=0.34%, 50=1.80%, 100=8.14%, 250=89.14% 00:28:12.610 lat (msec) : 500=0.52% 00:28:12.610 cpu : usr=0.97%, sys=1.24%, ctx=1340, majf=0, minf=1 00:28:12.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:12.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.610 issued rwts: total=0,4385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.610 job7: (groupid=0, jobs=1): err= 0: pid=3006597: Mon Jul 22 19:32:30 2024 00:28:12.610 write: IOPS=643, BW=161MiB/s (169MB/s)(1625MiB/10092msec); 0 zone resets 00:28:12.610 slat (usec): min=25, max=22976, avg=1419.22, stdev=2655.34 00:28:12.610 clat (msec): min=5, max=181, avg=97.93, stdev=21.65 00:28:12.610 lat (msec): min=6, max=181, avg=99.35, stdev=21.92 00:28:12.610 clat percentiles (msec): 00:28:12.610 | 1.00th=[ 20], 5.00th=[ 56], 10.00th=[ 72], 20.00th=[ 88], 00:28:12.610 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 100], 60.00th=[ 109], 00:28:12.610 | 70.00th=[ 115], 80.00th=[ 116], 90.00th=[ 117], 95.00th=[ 120], 00:28:12.610 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 171], 99.95th=[ 176], 00:28:12.610 | 99.99th=[ 182] 00:28:12.610 bw ( KiB/s): min=133120, max=224256, per=9.37%, avg=164761.60, stdev=25477.69, samples=20 00:28:12.610 iops : min= 520, max= 876, avg=643.60, stdev=99.52, samples=20 00:28:12.610 lat (msec) : 10=0.23%, 20=0.83%, 50=3.20%, 100=48.88%, 250=46.85% 00:28:12.610 cpu : usr=1.39%, sys=1.98%, ctx=2166, majf=0, minf=1 00:28:12.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:28:12.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.610 issued rwts: total=0,6499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.610 job8: (groupid=0, jobs=1): err= 0: pid=3006604: Mon Jul 22 19:32:30 2024 00:28:12.610 write: IOPS=623, BW=156MiB/s (164MB/s)(1574MiB/10089msec); 0 zone resets 00:28:12.610 slat (usec): min=14, max=82493, 
avg=1503.99, stdev=3224.50 00:28:12.610 clat (msec): min=2, max=236, avg=101.02, stdev=28.04 00:28:12.610 lat (msec): min=2, max=236, avg=102.53, stdev=28.35 00:28:12.610 clat percentiles (msec): 00:28:12.610 | 1.00th=[ 28], 5.00th=[ 63], 10.00th=[ 70], 20.00th=[ 77], 00:28:12.610 | 30.00th=[ 91], 40.00th=[ 94], 50.00th=[ 97], 60.00th=[ 110], 00:28:12.610 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 122], 95.00th=[ 161], 00:28:12.610 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 228], 00:28:12.610 | 99.99th=[ 236] 00:28:12.610 bw ( KiB/s): min=91648, max=224256, per=9.08%, avg=159513.60, stdev=32333.32, samples=20 00:28:12.610 iops : min= 358, max= 876, avg=623.10, stdev=126.30, samples=20 00:28:12.610 lat (msec) : 4=0.03%, 10=0.21%, 20=0.33%, 50=2.02%, 100=49.14% 00:28:12.610 lat (msec) : 250=48.27% 00:28:12.610 cpu : usr=1.51%, sys=2.06%, ctx=1924, majf=0, minf=1 00:28:12.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:12.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.610 issued rwts: total=0,6294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.610 job9: (groupid=0, jobs=1): err= 0: pid=3006605: Mon Jul 22 19:32:30 2024 00:28:12.610 write: IOPS=629, BW=157MiB/s (165MB/s)(1586MiB/10082msec); 0 zone resets 00:28:12.610 slat (usec): min=23, max=45029, avg=1565.57, stdev=2774.77 00:28:12.610 clat (msec): min=31, max=172, avg=100.01, stdev=17.71 00:28:12.610 lat (msec): min=31, max=172, avg=101.57, stdev=17.79 00:28:12.610 clat percentiles (msec): 00:28:12.610 | 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 88], 00:28:12.610 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 100], 60.00th=[ 110], 00:28:12.610 | 70.00th=[ 115], 80.00th=[ 116], 90.00th=[ 117], 95.00th=[ 121], 00:28:12.610 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 167], 00:28:12.610 | 99.99th=[ 174] 00:28:12.610 bw ( KiB/s): min=129024, max=232448, per=9.15%, avg=160810.15, stdev=25862.45, samples=20 00:28:12.610 iops : min= 504, max= 908, avg=628.15, stdev=101.02, samples=20 00:28:12.610 lat (msec) : 50=0.33%, 100=50.17%, 250=49.50% 00:28:12.610 cpu : usr=1.44%, sys=2.03%, ctx=1652, majf=0, minf=1 00:28:12.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:12.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.610 issued rwts: total=0,6344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.610 job10: (groupid=0, jobs=1): err= 0: pid=3006606: Mon Jul 22 19:32:30 2024 00:28:12.610 write: IOPS=492, BW=123MiB/s (129MB/s)(1244MiB/10092msec); 0 zone resets 00:28:12.610 slat (usec): min=24, max=22146, avg=1902.31, stdev=3702.55 00:28:12.610 clat (msec): min=4, max=185, avg=127.91, stdev=43.95 00:28:12.610 lat (msec): min=4, max=186, avg=129.82, stdev=44.62 00:28:12.610 clat percentiles (msec): 00:28:12.610 | 1.00th=[ 23], 5.00th=[ 52], 10.00th=[ 71], 20.00th=[ 77], 00:28:12.610 | 30.00th=[ 96], 40.00th=[ 123], 50.00th=[ 132], 60.00th=[ 159], 00:28:12.610 | 70.00th=[ 167], 80.00th=[ 171], 90.00th=[ 176], 95.00th=[ 180], 00:28:12.610 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 186], 99.95th=[ 186], 00:28:12.610 | 99.99th=[ 186] 00:28:12.610 bw ( KiB/s): min=92160, 
max=212480, per=7.15%, avg=125721.60, stdev=42781.39, samples=20 00:28:12.610 iops : min= 360, max= 830, avg=491.10, stdev=167.11, samples=20 00:28:12.610 lat (msec) : 10=0.18%, 20=0.62%, 50=3.74%, 100=27.70%, 250=67.75% 00:28:12.610 cpu : usr=1.19%, sys=1.35%, ctx=1654, majf=0, minf=1 00:28:12.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:28:12.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:12.610 issued rwts: total=0,4974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:12.610 00:28:12.610 Run status group 0 (all jobs): 00:28:12.610 WRITE: bw=1716MiB/s (1800MB/s), 102MiB/s-254MiB/s (107MB/s-267MB/s), io=16.9GiB (18.2GB), run=10082-10104msec 00:28:12.610 00:28:12.610 Disk stats (read/write): 00:28:12.610 nvme0n1: ios=49/15466, merge=0/0, ticks=98/1227333, in_queue=1227431, util=96.96% 00:28:12.610 nvme10n1: ios=47/20184, merge=0/0, ticks=447/1198992, in_queue=1199439, util=97.69% 00:28:12.610 nvme1n1: ios=23/15454, merge=0/0, ticks=65/1226648, in_queue=1226713, util=97.20% 00:28:12.610 nvme2n1: ios=0/8277, merge=0/0, ticks=0/1198602, in_queue=1198602, util=97.17% 00:28:12.610 nvme3n1: ios=41/7927, merge=0/0, ticks=1490/1196299, in_queue=1197789, util=99.87% 00:28:12.610 nvme4n1: ios=0/13076, merge=0/0, ticks=0/1200963, in_queue=1200963, util=97.73% 00:28:12.610 nvme5n1: ios=44/8757, merge=0/0, ticks=3128/1220645, in_queue=1223773, util=99.93% 00:28:12.610 nvme6n1: ios=0/12662, merge=0/0, ticks=0/1202398, in_queue=1202398, util=98.11% 00:28:12.611 nvme7n1: ios=40/12253, merge=0/0, ticks=2250/1191620, in_queue=1193870, util=99.95% 00:28:12.611 nvme8n1: ios=32/12348, merge=0/0, ticks=1222/1195678, in_queue=1196900, util=100.00% 00:28:12.611 nvme9n1: ios=0/9610, merge=0/0, ticks=0/1200955, in_queue=1200955, util=99.08% 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:12.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:12.611 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:13.182 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:13.182 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:13.183 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.183 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:13.183 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.183 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:13.183 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:13.755 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:13.755 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:14.016 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:14.016 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:14.017 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.017 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:14.017 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.017 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:14.017 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:14.587 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:14.587 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:14.848 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:14.848 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:14.848 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:14.848 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:14.848 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.110 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:15.372 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.372 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:15.634 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.634 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:15.896 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.896 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:16.470 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:16.470 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:16.731 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:16.731 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:16.731 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.732 rmmod nvme_tcp 00:28:16.732 rmmod nvme_fabrics 00:28:16.732 rmmod nvme_keyring 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2995724 ']' 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2995724 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 2995724 ']' 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 2995724 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2995724 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2995724' 00:28:16.732 killing process with pid 2995724 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 2995724 00:28:16.732 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 2995724 00:28:19.281 19:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:19.281 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:19.281 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:19.281 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.281 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:19.281 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.281 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.281 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:21.199 00:28:21.199 real 1m21.559s 00:28:21.199 user 5m5.012s 00:28:21.199 sys 0m23.650s 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:21.199 ************************************ 00:28:21.199 END TEST nvmf_multiconnection 00:28:21.199 ************************************ 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:21.199 ************************************ 00:28:21.199 START TEST nvmf_initiator_timeout 00:28:21.199 ************************************ 00:28:21.199 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:21.199 * Looking for test storage... 
00:28:21.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.200 19:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:28:21.200 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.403 19:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:29.403 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.403 19:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:29.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:29.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.403 19:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:29.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:29.403 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.404 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:29.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:28:29.404 00:28:29.404 --- 10.0.0.2 ping statistics --- 00:28:29.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.404 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:28:29.404 00:28:29.404 --- 10.0.0.1 ping statistics --- 00:28:29.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.404 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3013795 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3013795 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 3013795 ']' 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
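For readers reconstructing the harness steps from this trace: nvmfappstart launches the target inside the cvl_0_0_ns_spdk network namespace and then waits for its RPC socket before issuing any rpc_cmd. A minimal standalone sketch of that launch-and-wait step, assuming the default RPC socket path /var/tmp/spdk.sock and substituting a simple poll loop for the harness's waitforlisten helper:

  # Start the NVMe-oF target inside the test namespace (same binary and flags as traced above).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Simplified stand-in for waitforlisten: poll until the RPC socket exists or the target exits.
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"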
00:28:29.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:29.404 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 [2024-07-22 19:32:47.321862] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:29.404 [2024-07-22 19:32:47.321986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.404 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.404 [2024-07-22 19:32:47.459861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.404 [2024-07-22 19:32:47.643908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.404 [2024-07-22 19:32:47.643955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.404 [2024-07-22 19:32:47.643969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.404 [2024-07-22 19:32:47.643979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.404 [2024-07-22 19:32:47.643989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.404 [2024-07-22 19:32:47.644172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.404 [2024-07-22 19:32:47.644256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.404 [2024-07-22 19:32:47.644351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.404 [2024-07-22 19:32:47.644377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 Malloc0 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
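Taken together, the rpc_cmd calls traced around this point amount to the following target-side setup. This is a hedged sketch using direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket (an assumption; the harness's rpc_cmd wrapper is only roughly equivalent). The Delay0 bdev created here is the knob the rest of the test turns: bdev_delay_update_latency later raises its latencies to 31000000 while fio runs, to exercise the initiator timeout path, and then drops them back to 30.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above).
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Wrap it in a delay bdev; these are the initial latency arguments traced below.
  $SPDK/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  # TCP transport plus subsystem, namespace and listener, with the same options as the trace.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After this setup the initiator side simply runs nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 (with the host NQN/ID shown below) and waits for the SPDKISFASTANDAWESOME serial to appear in lsblk before starting fio.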
00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 Delay0 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 [2024-07-22 19:32:48.181374] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.404 [2024-07-22 19:32:48.221706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.404 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.405 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:30.789 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:30.789 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 
-- # local i=0 00:28:30.789 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:30.789 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:30.789 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3014603 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:33.333 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:33.333 [global] 00:28:33.333 thread=1 00:28:33.333 invalidate=1 00:28:33.333 rw=write 00:28:33.333 time_based=1 00:28:33.333 runtime=60 00:28:33.333 ioengine=libaio 00:28:33.333 direct=1 00:28:33.333 bs=4096 00:28:33.333 iodepth=1 00:28:33.333 norandommap=0 00:28:33.333 numjobs=1 00:28:33.333 00:28:33.333 verify_dump=1 00:28:33.333 verify_backlog=512 00:28:33.333 verify_state_save=0 00:28:33.333 do_verify=1 00:28:33.333 verify=crc32c-intel 00:28:33.333 [job0] 00:28:33.333 filename=/dev/nvme0n1 00:28:33.333 Could not set queue depth (nvme0n1) 00:28:33.333 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:33.333 fio-3.35 00:28:33.333 Starting 1 thread 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:35.878 true 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:35.878 true 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.878 19:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:35.878 true 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:35.878 true 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.878 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.177 true 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.177 true 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.177 true 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.177 true 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.177 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:39.177 19:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3014603 00:29:35.442 00:29:35.442 job0: (groupid=0, jobs=1): err= 0: pid=3014796: Mon Jul 22 19:33:52 2024 00:29:35.442 read: IOPS=171, BW=686KiB/s (702kB/s)(40.2MiB/60018msec) 00:29:35.442 slat (usec): min=6, max=15162, avg=28.00, stdev=185.60 00:29:35.442 clat (usec): min=371, max=42124k, avg=5130.37, stdev=415318.60 00:29:35.442 lat (usec): min=378, max=42124k, avg=5158.37, stdev=415318.63 00:29:35.442 clat percentiles (usec): 00:29:35.442 | 1.00th=[ 578], 5.00th=[ 652], 10.00th=[ 701], 20.00th=[ 750], 00:29:35.442 | 30.00th=[ 775], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 857], 00:29:35.442 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 1090], 95.00th=[ 1205], 00:29:35.442 | 99.00th=[ 1287], 99.50th=[ 1663], 99.90th=[42206], 99.95th=[42206], 00:29:35.442 | 99.99th=[43779] 00:29:35.442 write: IOPS=179, BW=717KiB/s (734kB/s)(42.0MiB/60018msec); 0 zone resets 00:29:35.442 slat (usec): min=8, max=29202, avg=32.13, stdev=281.54 00:29:35.442 clat (usec): min=194, max=4360, avg=599.43, stdev=112.78 00:29:35.442 lat (usec): min=221, max=29924, avg=631.56, stdev=305.92 00:29:35.442 clat percentiles (usec): 00:29:35.442 | 1.00th=[ 355], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 502], 00:29:35.442 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:29:35.442 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 758], 00:29:35.442 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 938], 99.95th=[ 996], 00:29:35.442 | 99.99th=[ 1270] 00:29:35.442 bw ( KiB/s): min= 344, max= 4096, per=100.00%, avg=2867.20, stdev=1202.05, samples=30 00:29:35.442 iops : min= 86, max= 1024, avg=716.80, stdev=300.51, samples=30 00:29:35.442 lat (usec) : 250=0.06%, 500=10.18%, 750=47.76%, 1000=36.39% 00:29:35.442 lat (msec) : 2=5.38%, 10=0.01%, 50=0.22%, >=2000=0.01% 00:29:35.442 cpu : usr=0.70%, sys=1.32%, ctx=21048, majf=0, minf=36 00:29:35.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.442 issued rwts: total=10287,10752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.442 00:29:35.442 Run status group 0 (all jobs): 00:29:35.442 READ: bw=686KiB/s (702kB/s), 686KiB/s-686KiB/s (702kB/s-702kB/s), io=40.2MiB (42.1MB), run=60018-60018msec 00:29:35.442 WRITE: bw=717KiB/s (734kB/s), 717KiB/s-717KiB/s (734kB/s-734kB/s), io=42.0MiB (44.0MB), run=60018-60018msec 00:29:35.442 00:29:35.442 Disk stats (read/write): 00:29:35.442 nvme0n1: ios=10338/10752, merge=0/0, ticks=10753/5328, in_queue=16081, util=99.60% 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:35.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q 
-w SPDKISFASTANDAWESOME 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:35.442 nvmf hotplug test: fio successful as expected 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:35.442 rmmod nvme_tcp 00:29:35.442 rmmod nvme_fabrics 00:29:35.442 rmmod nvme_keyring 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:29:35.442 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3013795 ']' 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3013795 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 3013795 ']' 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 3013795 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3013795 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3013795' 00:29:35.443 killing process with pid 3013795 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 3013795 00:29:35.443 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 3013795 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.443 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.830 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:36.830 00:29:36.830 real 1m15.801s 00:29:36.830 user 4m41.639s 00:29:36.830 sys 0m7.843s 00:29:36.830 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.830 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:36.830 ************************************ 00:29:36.830 END TEST nvmf_initiator_timeout 00:29:36.830 ************************************ 00:29:37.091 19:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:29:37.091 19:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:29:37.091 19:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:29:37.091 19:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:29:37.091 19:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:29:37.091 19:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:43.685 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.685 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:29:43.685 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:43.686 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:43.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:43.686 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:43.686 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # 
run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:43.686 ************************************ 00:29:43.686 START TEST nvmf_perf_adq 00:29:43.686 ************************************ 00:29:43.686 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:43.948 * Looking for test storage... 00:29:43.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.948 19:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:43.948 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.605 19:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:50.605 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:50.605 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:50.605 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:50.605 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:29:50.605 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:52.518 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:54.453 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.747 19:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:59.747 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:59.747 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.747 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.748 19:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:59.748 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:59.748 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:59.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:29:59.748 00:29:59.748 --- 10.0.0.2 ping statistics --- 00:29:59.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.748 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:29:59.748 00:29:59.748 --- 10.0.0.1 ping statistics --- 00:29:59.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.748 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3036318 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@482 -- # waitforlisten 3036318 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3036318 ']' 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:59.748 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:59.748 [2024-07-22 19:34:18.465334] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:59.748 [2024-07-22 19:34:18.465467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.748 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.748 [2024-07-22 19:34:18.603558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.011 [2024-07-22 19:34:18.787249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.011 [2024-07-22 19:34:18.787293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.011 [2024-07-22 19:34:18.787306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.011 [2024-07-22 19:34:18.787315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.011 [2024-07-22 19:34:18.787325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
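A minimal sketch of the loopback topology and target launch that the harness performs in the commands above, assuming two ice ports already renamed cvl_0_0/cvl_0_1 and the SPDK tree path shown in this log; the polling loop is a stand-in for the harness's waitforlisten helper and only uses scripts/rpc.py with the spdk_get_version method to detect that the RPC socket is up:

  #!/usr/bin/env bash
  set -e
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
  NS=cvl_0_0_ns_spdk
  # Put the target-side port in its own network namespace so initiator and
  # target can talk to each other over two physical ports of the same host.
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1 # sanity-check both directions
  # Launch the target inside the namespace, paused until RPC configuration.
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

The -m 0xF mask pins the target's reactors to cores 0-3 (the four reactors reported below); the perf initiator later uses the disjoint mask 0xF0, so target and initiator polling threads never share a core.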
00:30:00.011 [2024-07-22 19:34:18.787519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.011 [2024-07-22 19:34:18.787604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.011 [2024-07-22 19:34:18.787719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.011 [2024-07-22 19:34:18.787745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.272 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:00.272 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:30:00.272 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:00.272 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:00.272 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.533 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.794 [2024-07-22 19:34:19.564358] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.794 Malloc1 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.794 [2024-07-22 19:34:19.660999] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3036672 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:30:00.794 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:00.794 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:30:03.323 "tick_rate": 2400000000, 00:30:03.323 "poll_groups": [ 00:30:03.323 { 00:30:03.323 "name": "nvmf_tgt_poll_group_000", 00:30:03.323 "admin_qpairs": 1, 00:30:03.323 "io_qpairs": 1, 00:30:03.323 "current_admin_qpairs": 1, 00:30:03.323 
"current_io_qpairs": 1, 00:30:03.323 "pending_bdev_io": 0, 00:30:03.323 "completed_nvme_io": 21076, 00:30:03.323 "transports": [ 00:30:03.323 { 00:30:03.323 "trtype": "TCP" 00:30:03.323 } 00:30:03.323 ] 00:30:03.323 }, 00:30:03.323 { 00:30:03.323 "name": "nvmf_tgt_poll_group_001", 00:30:03.323 "admin_qpairs": 0, 00:30:03.323 "io_qpairs": 1, 00:30:03.323 "current_admin_qpairs": 0, 00:30:03.323 "current_io_qpairs": 1, 00:30:03.323 "pending_bdev_io": 0, 00:30:03.323 "completed_nvme_io": 27807, 00:30:03.323 "transports": [ 00:30:03.323 { 00:30:03.323 "trtype": "TCP" 00:30:03.323 } 00:30:03.323 ] 00:30:03.323 }, 00:30:03.323 { 00:30:03.323 "name": "nvmf_tgt_poll_group_002", 00:30:03.323 "admin_qpairs": 0, 00:30:03.323 "io_qpairs": 1, 00:30:03.323 "current_admin_qpairs": 0, 00:30:03.323 "current_io_qpairs": 1, 00:30:03.323 "pending_bdev_io": 0, 00:30:03.323 "completed_nvme_io": 20767, 00:30:03.323 "transports": [ 00:30:03.323 { 00:30:03.323 "trtype": "TCP" 00:30:03.323 } 00:30:03.323 ] 00:30:03.323 }, 00:30:03.323 { 00:30:03.323 "name": "nvmf_tgt_poll_group_003", 00:30:03.323 "admin_qpairs": 0, 00:30:03.323 "io_qpairs": 1, 00:30:03.323 "current_admin_qpairs": 0, 00:30:03.323 "current_io_qpairs": 1, 00:30:03.323 "pending_bdev_io": 0, 00:30:03.323 "completed_nvme_io": 20571, 00:30:03.323 "transports": [ 00:30:03.323 { 00:30:03.323 "trtype": "TCP" 00:30:03.323 } 00:30:03.323 ] 00:30:03.323 } 00:30:03.323 ] 00:30:03.323 }' 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:30:03.323 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3036672 00:30:11.429 Initializing NVMe Controllers 00:30:11.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:11.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:11.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:11.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:11.429 Initialization complete. Launching workers. 
00:30:11.429 ======================================================== 00:30:11.429 Latency(us) 00:30:11.429 Device Information : IOPS MiB/s Average min max 00:30:11.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11439.49 44.69 5594.43 1177.38 8416.16 00:30:11.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15509.55 60.58 4125.94 1024.04 9955.04 00:30:11.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14103.56 55.09 4537.93 1203.42 11205.67 00:30:11.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14080.16 55.00 4545.04 1138.60 10845.22 00:30:11.429 ======================================================== 00:30:11.429 Total : 55132.75 215.36 4643.06 1024.04 11205.67 00:30:11.429 00:30:11.429 [2024-07-22 19:34:29.870835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:11.429 rmmod nvme_tcp 00:30:11.429 rmmod nvme_fabrics 00:30:11.429 rmmod nvme_keyring 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3036318 ']' 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3036318 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3036318 ']' 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3036318 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:11.429 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3036318 00:30:11.429 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:11.429 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:11.429 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3036318' 00:30:11.429 killing process with pid 3036318 00:30:11.429 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3036318 00:30:11.429 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@972 -- # wait 3036318 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.372 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.286 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:14.286 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:30:14.286 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:15.667 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:18.209 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:23.503 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:23.503 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:23.503 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
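Between passes the harness tears the first setup down and reloads the ice driver before re-running nvmftestinit, as seen a few entries back. Roughly, and assuming the same names as in this log (the harness's remove_spdk_ns helper is approximated here by ip netns delete, which also returns cvl_0_0 to the default namespace):

  # Stop the target and unload the host-side NVMe fabrics modules
  # (mirrors nvmftestfini/nvmfcleanup above).
  kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Drop the test namespace and leftover addresses.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1
  # ADQ wants a freshly loaded ice driver, so reload it and give the ports a
  # few seconds to reappear before the next nvmftestinit pass.
  rmmod ice
  modprobe ice
  sleep 5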
00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:23.503 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.503 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:23.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:23.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:30:23.504 00:30:23.504 --- 10.0.0.2 ping statistics --- 00:30:23.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.504 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:30:23.504 00:30:23.504 --- 10.0.0.1 ping statistics --- 00:30:23.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.504 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:30:23.504 net.core.busy_poll = 1 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:30:23.504 net.core.busy_read = 1 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:30:23.504 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:30:23.504 19:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3041140 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3041140 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3041140 ']' 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:23.504 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:23.504 [2024-07-22 19:34:42.239172] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:23.504 [2024-07-22 19:34:42.239281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.504 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.504 [2024-07-22 19:34:42.361668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:23.819 [2024-07-22 19:34:42.545710] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.819 [2024-07-22 19:34:42.545754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.819 [2024-07-22 19:34:42.545767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.819 [2024-07-22 19:34:42.545777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.819 [2024-07-22 19:34:42.545787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
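The adq_configure_driver sequence traced a few entries above boils down to a handful of standard Linux commands: enable hardware TC offload on the E810 port, turn on socket busy polling, then use an mqprio channel qdisc plus a flower filter to steer the NVMe/TCP listener traffic into its own traffic class. A minimal sketch of that sequence, reusing the interface name cvl_0_0, namespace cvl_0_0_ns_spdk, target address 10.0.0.2 and port 4420 seen in this run, looks roughly like:

    # Enable hardware TC offload and disable the ice driver's packet-inspect optimization
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # Enable socket busy polling for both the read and the poll/select paths
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes, two queues each, offloaded to hardware in channel (ADQ) mode
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP to 10.0.0.2:4420 into hardware TC 1, bypassing the software path (skip_sw)
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The script additionally runs scripts/perf/nvmf/set_xps_rxqs on cvl_0_0, presumably to align XPS transmit-queue steering with the receive queues; that helper is part of the SPDK tree and is not reproduced here.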
00:30:23.819 [2024-07-22 19:34:42.545992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.819 [2024-07-22 19:34:42.546087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.819 [2024-07-22 19:34:42.546208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.819 [2024-07-22 19:34:42.546242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.093 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:24.093 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:30:24.093 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:24.093 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:24.093 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.093 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.093 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:30:24.093 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:24.093 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:24.093 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.093 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.093 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.354 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:24.354 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:30:24.354 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.355 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.355 [2024-07-22 19:34:43.307407] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.615 Malloc1 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:24.615 [2024-07-22 19:34:43.404232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3041498 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:30:24.615 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:24.615 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.523 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:30:26.523 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.523 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:26.523 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.523 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:30:26.523 "tick_rate": 2400000000, 00:30:26.523 "poll_groups": [ 00:30:26.523 { 00:30:26.523 "name": "nvmf_tgt_poll_group_000", 00:30:26.523 "admin_qpairs": 1, 00:30:26.523 "io_qpairs": 2, 00:30:26.523 "current_admin_qpairs": 1, 00:30:26.523 
"current_io_qpairs": 2, 00:30:26.523 "pending_bdev_io": 0, 00:30:26.523 "completed_nvme_io": 25844, 00:30:26.523 "transports": [ 00:30:26.523 { 00:30:26.523 "trtype": "TCP" 00:30:26.523 } 00:30:26.523 ] 00:30:26.523 }, 00:30:26.523 { 00:30:26.523 "name": "nvmf_tgt_poll_group_001", 00:30:26.523 "admin_qpairs": 0, 00:30:26.523 "io_qpairs": 2, 00:30:26.523 "current_admin_qpairs": 0, 00:30:26.523 "current_io_qpairs": 2, 00:30:26.523 "pending_bdev_io": 0, 00:30:26.523 "completed_nvme_io": 36220, 00:30:26.523 "transports": [ 00:30:26.523 { 00:30:26.523 "trtype": "TCP" 00:30:26.523 } 00:30:26.523 ] 00:30:26.523 }, 00:30:26.523 { 00:30:26.523 "name": "nvmf_tgt_poll_group_002", 00:30:26.523 "admin_qpairs": 0, 00:30:26.523 "io_qpairs": 0, 00:30:26.523 "current_admin_qpairs": 0, 00:30:26.523 "current_io_qpairs": 0, 00:30:26.523 "pending_bdev_io": 0, 00:30:26.524 "completed_nvme_io": 0, 00:30:26.524 "transports": [ 00:30:26.524 { 00:30:26.524 "trtype": "TCP" 00:30:26.524 } 00:30:26.524 ] 00:30:26.524 }, 00:30:26.524 { 00:30:26.524 "name": "nvmf_tgt_poll_group_003", 00:30:26.524 "admin_qpairs": 0, 00:30:26.524 "io_qpairs": 0, 00:30:26.524 "current_admin_qpairs": 0, 00:30:26.524 "current_io_qpairs": 0, 00:30:26.524 "pending_bdev_io": 0, 00:30:26.524 "completed_nvme_io": 0, 00:30:26.524 "transports": [ 00:30:26.524 { 00:30:26.524 "trtype": "TCP" 00:30:26.524 } 00:30:26.524 ] 00:30:26.524 } 00:30:26.524 ] 00:30:26.524 }' 00:30:26.524 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:30:26.524 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:30:26.784 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:30:26.784 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:30:26.784 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3041498 00:30:34.962 Initializing NVMe Controllers 00:30:34.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:34.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:34.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:34.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:34.962 Initialization complete. Launching workers. 
00:30:34.962 ======================================================== 00:30:34.962 Latency(us) 00:30:34.962 Device Information : IOPS MiB/s Average min max 00:30:34.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11721.50 45.79 5459.88 1325.22 49407.47 00:30:34.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7697.30 30.07 8314.04 1451.05 51452.45 00:30:34.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8566.10 33.46 7494.38 1453.41 52719.89 00:30:34.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9806.50 38.31 6527.90 1480.87 51835.77 00:30:34.962 ======================================================== 00:30:34.962 Total : 37791.40 147.62 6779.51 1325.22 52719.89 00:30:34.962 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:34.962 rmmod nvme_tcp 00:30:34.962 rmmod nvme_fabrics 00:30:34.962 rmmod nvme_keyring 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3041140 ']' 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3041140 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3041140 ']' 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3041140 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3041140 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3041140' 00:30:34.962 killing process with pid 3041140 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3041140 00:30:34.962 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3041140 00:30:35.904 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:35.904 
19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:35.904 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:35.904 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:35.904 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:35.905 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.905 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.905 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:38.452 00:30:38.452 real 0m54.252s 00:30:38.452 user 2m54.504s 00:30:38.452 sys 0m10.992s 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.452 ************************************ 00:30:38.452 END TEST nvmf_perf_adq 00:30:38.452 ************************************ 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:38.452 ************************************ 00:30:38.452 START TEST nvmf_shutdown 00:30:38.452 ************************************ 00:30:38.452 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:38.452 * Looking for test storage... 
00:30:38.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.452 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.453 19:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:38.453 19:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:38.453 ************************************ 00:30:38.453 START TEST nvmf_shutdown_tc1 00:30:38.453 ************************************ 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:38.453 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:30:45.044 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:45.045 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:45.045 19:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:45.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:45.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:45.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.045 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.307 19:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.307 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.307 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:45.307 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.307 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.307 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.307 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:45.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:30:45.307 00:30:45.307 --- 10.0.0.2 ping statistics --- 00:30:45.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.307 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:30:45.307 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:30:45.307 00:30:45.307 --- 10.0.0.1 ping statistics --- 00:30:45.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.308 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:30:45.308 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.308 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:30:45.308 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3047791 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3047791 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3047791 ']' 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:45.569 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:45.569 [2024-07-22 19:35:04.413644] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:45.569 [2024-07-22 19:35:04.413777] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.569 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.830 [2024-07-22 19:35:04.569390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:46.091 [2024-07-22 19:35:04.802705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.091 [2024-07-22 19:35:04.802765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.091 [2024-07-22 19:35:04.802780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.091 [2024-07-22 19:35:04.802792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.091 [2024-07-22 19:35:04.802803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
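As in the perf_adq run earlier, nvmf_tcp_init gives the shutdown test its own small topology: one E810 port (cvl_0_0, the target side) is moved into a private network namespace and addressed as 10.0.0.2, while its peer port (cvl_0_1, the initiator side) stays in the default namespace as 10.0.0.1, with an iptables rule admitting NVMe/TCP on port 4420 and a ping in each direction as a sanity check. Condensed from the commands traced above, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP back in
    ping -c 1 10.0.0.2                                     # default namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> default namespace

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, as traced just above), so only the namespaced port can reach its TCP listener.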
00:30:46.091 [2024-07-22 19:35:04.802971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.091 [2024-07-22 19:35:04.803125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.091 [2024-07-22 19:35:04.803258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.091 [2024-07-22 19:35:04.803286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.351 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:46.351 [2024-07-22 19:35:05.208574] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.352 19:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:46.612 Malloc1 00:30:46.612 [2024-07-22 19:35:05.349366] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.612 Malloc2 00:30:46.612 Malloc3 00:30:46.612 Malloc4 00:30:46.872 Malloc5 00:30:46.872 Malloc6 00:30:46.872 Malloc7 00:30:47.132 Malloc8 00:30:47.132 Malloc9 00:30:47.132 Malloc10 00:30:47.132 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.132 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:47.132 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:47.132 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:47.393 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3048171 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3048171 /var/tmp/bdevperf.sock 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3048171 ']' 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:47.394 19:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:47.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": 
"Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.394 { 00:30:47.394 "params": { 00:30:47.394 "name": "Nvme$subsystem", 00:30:47.394 "trtype": "$TEST_TRANSPORT", 00:30:47.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.394 "adrfam": "ipv4", 00:30:47.394 "trsvcid": "$NVMF_PORT", 00:30:47.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.394 "hdgst": ${hdgst:-false}, 00:30:47.394 "ddgst": ${ddgst:-false} 00:30:47.394 }, 00:30:47.394 "method": "bdev_nvme_attach_controller" 00:30:47.394 } 00:30:47.394 EOF 00:30:47.394 )") 00:30:47.394 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.395 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:47.395 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:47.395 { 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme$subsystem", 00:30:47.395 "trtype": "$TEST_TRANSPORT", 00:30:47.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "$NVMF_PORT", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.395 "hdgst": ${hdgst:-false}, 00:30:47.395 "ddgst": ${ddgst:-false} 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 } 00:30:47.395 EOF 00:30:47.395 )") 00:30:47.395 [2024-07-22 19:35:06.169568] Starting SPDK 
v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:47.395 [2024-07-22 19:35:06.169670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:47.395 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:47.395 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:30:47.395 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:47.395 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme1", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme2", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme3", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme4", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme5", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme6", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme7", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 
00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme8", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme9", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 },{ 00:30:47.395 "params": { 00:30:47.395 "name": "Nvme10", 00:30:47.395 "trtype": "tcp", 00:30:47.395 "traddr": "10.0.0.2", 00:30:47.395 "adrfam": "ipv4", 00:30:47.395 "trsvcid": "4420", 00:30:47.395 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:47.395 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:47.395 "hdgst": false, 00:30:47.395 "ddgst": false 00:30:47.395 }, 00:30:47.395 "method": "bdev_nvme_attach_controller" 00:30:47.395 }' 00:30:47.395 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.395 [2024-07-22 19:35:06.282699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.656 [2024-07-22 19:35:06.461358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3048171 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:30:50.199 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:30:50.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3048171 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3047791 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.770 { 00:30:50.770 "params": { 00:30:50.770 "name": "Nvme$subsystem", 00:30:50.770 "trtype": "$TEST_TRANSPORT", 00:30:50.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.770 "adrfam": "ipv4", 00:30:50.770 "trsvcid": "$NVMF_PORT", 00:30:50.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.770 "hdgst": ${hdgst:-false}, 00:30:50.770 "ddgst": ${ddgst:-false} 00:30:50.770 }, 00:30:50.770 "method": "bdev_nvme_attach_controller" 00:30:50.770 } 00:30:50.770 EOF 00:30:50.770 )") 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.770 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.770 { 00:30:50.770 "params": { 00:30:50.770 "name": "Nvme$subsystem", 00:30:50.770 "trtype": "$TEST_TRANSPORT", 00:30:50.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.770 "adrfam": "ipv4", 00:30:50.770 "trsvcid": "$NVMF_PORT", 00:30:50.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.770 "hdgst": ${hdgst:-false}, 00:30:50.770 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.771 { 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme$subsystem", 00:30:50.771 "trtype": "$TEST_TRANSPORT", 00:30:50.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "$NVMF_PORT", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.771 "hdgst": ${hdgst:-false}, 00:30:50.771 "ddgst": ${ddgst:-false} 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 } 00:30:50.771 EOF 00:30:50.771 )") 00:30:50.771 [2024-07-22 19:35:09.682667] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:50.771 [2024-07-22 19:35:09.682781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048838 ] 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
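As with the bdev_svc launch earlier, the configuration is handed to the application through process substitution, which is why the trace shows --json /dev/fd/62 rather than a file on disk. For the measurement itself, shutdown.sh line 91 runs bdevperf with queue depth 64, 64 KiB I/Os, the verify workload and a 1 second run; a sketch of that invocation, paths assumed relative to the SPDK checkout:

# 64 KiB I/Os at queue depth 64, verify workload, 1 s run; the config comes from
# gen_nvmf_target_json for subsystems 1..10 via process substitution.
./build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 1

Because -o 65536 makes every I/O 64 KiB, the MiB/s column in the results table further down is simply IOPS divided by 16; Nvme1n1's 190.61 IOPS, for example, works out to about 11.91 MiB/s.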
00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:50.771 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme1", 00:30:50.771 "trtype": "tcp", 00:30:50.771 "traddr": "10.0.0.2", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "4420", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.771 "hdgst": false, 00:30:50.771 "ddgst": false 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 },{ 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme2", 00:30:50.771 "trtype": "tcp", 00:30:50.771 "traddr": "10.0.0.2", 00:30:50.771 "adrfam": "ipv4", 00:30:50.771 "trsvcid": "4420", 00:30:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:50.771 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:50.771 "hdgst": false, 00:30:50.771 "ddgst": false 00:30:50.771 }, 00:30:50.771 "method": "bdev_nvme_attach_controller" 00:30:50.771 },{ 00:30:50.771 "params": { 00:30:50.771 "name": "Nvme3", 00:30:50.771 "trtype": "tcp", 00:30:50.771 "traddr": "10.0.0.2", 00:30:50.771 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 },{ 00:30:50.772 "params": { 00:30:50.772 "name": "Nvme4", 00:30:50.772 "trtype": "tcp", 00:30:50.772 "traddr": "10.0.0.2", 00:30:50.772 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 },{ 00:30:50.772 "params": { 00:30:50.772 "name": "Nvme5", 00:30:50.772 "trtype": "tcp", 00:30:50.772 "traddr": "10.0.0.2", 00:30:50.772 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 },{ 00:30:50.772 "params": { 00:30:50.772 "name": "Nvme6", 00:30:50.772 "trtype": "tcp", 00:30:50.772 "traddr": "10.0.0.2", 00:30:50.772 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 },{ 00:30:50.772 "params": { 00:30:50.772 "name": "Nvme7", 00:30:50.772 "trtype": "tcp", 00:30:50.772 "traddr": "10.0.0.2", 00:30:50.772 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 },{ 00:30:50.772 "params": { 00:30:50.772 "name": "Nvme8", 00:30:50.772 "trtype": "tcp", 00:30:50.772 "traddr": "10.0.0.2", 00:30:50.772 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 },{ 00:30:50.772 "params": { 00:30:50.772 "name": "Nvme9", 00:30:50.772 "trtype": "tcp", 00:30:50.772 "traddr": "10.0.0.2", 00:30:50.772 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 },{ 00:30:50.772 "params": { 00:30:50.772 "name": "Nvme10", 00:30:50.772 "trtype": "tcp", 00:30:50.772 "traddr": "10.0.0.2", 00:30:50.772 "adrfam": "ipv4", 00:30:50.772 "trsvcid": "4420", 00:30:50.772 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:50.772 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:50.772 "hdgst": false, 00:30:50.772 "ddgst": false 00:30:50.772 }, 00:30:50.772 "method": "bdev_nvme_attach_controller" 00:30:50.772 }' 00:30:51.034 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.034 [2024-07-22 19:35:09.794555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.034 [2024-07-22 19:35:09.971967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.948 Running I/O for 1 seconds... 00:30:53.888 00:30:53.888 Latency(us) 00:30:53.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.888 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme1n1 : 1.01 190.61 11.91 0.00 0.00 331852.52 22282.24 267386.88 00:30:53.888 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme2n1 : 1.03 186.70 11.67 0.00 0.00 331861.05 22063.79 272629.76 00:30:53.888 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme3n1 : 1.11 229.62 14.35 0.00 0.00 265368.11 18459.31 265639.25 00:30:53.888 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme4n1 : 1.19 214.74 13.42 0.00 0.00 279385.39 20643.84 274377.39 00:30:53.888 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme5n1 : 1.14 223.96 14.00 0.00 0.00 262084.05 20971.52 269134.51 00:30:53.888 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme6n1 : 1.16 224.42 14.03 0.00 0.00 256106.39 4450.99 262144.00 00:30:53.888 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme7n1 : 1.15 221.81 13.86 0.00 0.00 254448.43 20643.84 248162.99 00:30:53.888 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme8n1 : 1.20 266.86 16.68 0.00 0.00 208084.82 16602.45 269134.51 00:30:53.888 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme9n1 : 1.21 317.04 19.82 0.00 0.00 172270.79 9065.81 262144.00 00:30:53.888 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 
64, IO size: 65536) 00:30:53.888 Verification LBA range: start 0x0 length 0x400 00:30:53.888 Nvme10n1 : 1.20 212.95 13.31 0.00 0.00 250903.04 18896.21 291853.65 00:30:53.888 =================================================================================================================== 00:30:53.888 Total : 2288.72 143.05 0.00 0.00 252161.41 4450.99 291853.65 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:54.830 rmmod nvme_tcp 00:30:54.830 rmmod nvme_fabrics 00:30:54.830 rmmod nvme_keyring 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3047791 ']' 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3047791 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3047791 ']' 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3047791 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3047791 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3047791' 00:30:54.830 killing process with pid 3047791 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3047791 00:30:54.830 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3047791 00:30:56.285 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:56.286 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:56.286 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:56.286 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.286 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:56.286 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.286 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.286 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:58.828 00:30:58.828 real 0m20.192s 00:30:58.828 user 0m48.877s 00:30:58.828 sys 0m6.944s 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:58.828 ************************************ 00:30:58.828 END TEST nvmf_shutdown_tc1 00:30:58.828 ************************************ 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:58.828 ************************************ 00:30:58.828 START TEST nvmf_shutdown_tc2 00:30:58.828 ************************************ 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.828 
19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:58.828 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:58.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:58.829 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:58.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:58.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:58.829 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:30:58.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:30:58.829 00:30:58.829 --- 10.0.0.2 ping statistics --- 00:30:58.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.829 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:58.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:30:58.829 00:30:58.829 --- 10.0.0.1 ping statistics --- 00:30:58.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.829 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3050476 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3050476 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3050476 ']' 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:58.829 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:59.090 [2024-07-22 19:35:17.790836] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:59.090 [2024-07-22 19:35:17.790931] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.090 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.090 [2024-07-22 19:35:17.901238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:59.090 [2024-07-22 19:35:18.041409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.090 [2024-07-22 19:35:18.041444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.090 [2024-07-22 19:35:18.041454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.090 [2024-07-22 19:35:18.041464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.090 [2024-07-22 19:35:18.041473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.090 [2024-07-22 19:35:18.041593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.090 [2024-07-22 19:35:18.041733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:59.090 [2024-07-22 19:35:18.041827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.090 [2024-07-22 19:35:18.041854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:59.665 [2024-07-22 19:35:18.569964] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.665 19:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.665 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:59.927 
19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.927 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:59.927 Malloc1 00:30:59.927 [2024-07-22 19:35:18.698390] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.927 Malloc2 00:30:59.927 Malloc3 00:30:59.927 Malloc4 00:31:00.187 Malloc5 00:31:00.187 Malloc6 00:31:00.187 Malloc7 00:31:00.187 Malloc8 00:31:00.447 Malloc9 00:31:00.447 Malloc10 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3050860 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3050860 /var/tmp/bdevperf.sock 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3050860 ']' 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:00.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
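[editor's note] At this point tc2 has created the ten Malloc bdevs and their NVMe-oF subsystems: each pass through the for-loop over num_subsystems above appends one per-subsystem block to rpcs.txt via the `cat` at shutdown.sh@28, and the single `rpc_cmd` at shutdown.sh@35 replays the whole batch against the target now listening on 10.0.0.2:4420. The heredoc contents are not echoed by this trace; a minimal sketch of what one iteration plausibly emits, using standard SPDK rpc.py command names (the Malloc size and the -a/-s flags are illustrative assumptions, not taken from this log), is:

    # Hypothetical reconstruction of one rpcs.txt entry for subsystem index $i.
    # RPC names are standard SPDK commands; block count, block size and serial are assumed.
    i=1
    cat <<-EOF >> rpcs.txt
        bdev_malloc_create -b Malloc$i 128 512
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
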
00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.447 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.447 { 00:31:00.447 "params": { 00:31:00.447 "name": "Nvme$subsystem", 00:31:00.447 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": 
"bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.448 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.448 { 00:31:00.448 "params": { 00:31:00.448 "name": "Nvme$subsystem", 00:31:00.448 "trtype": "$TEST_TRANSPORT", 00:31:00.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.448 "adrfam": "ipv4", 00:31:00.448 "trsvcid": "$NVMF_PORT", 00:31:00.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.448 "hdgst": ${hdgst:-false}, 00:31:00.448 "ddgst": ${ddgst:-false} 00:31:00.448 }, 00:31:00.448 "method": "bdev_nvme_attach_controller" 00:31:00.448 } 00:31:00.448 EOF 00:31:00.448 )") 00:31:00.709 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:00.709 [2024-07-22 19:35:19.402207] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:00.709 [2024-07-22 19:35:19.402314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050860 ] 00:31:00.709 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:31:00.709 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:31:00.709 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme1", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:00.709 "hdgst": false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.709 "method": "bdev_nvme_attach_controller" 00:31:00.709 },{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme2", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:00.709 "hdgst": false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.709 "method": "bdev_nvme_attach_controller" 00:31:00.709 },{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme3", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:00.709 "hdgst": false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.709 "method": "bdev_nvme_attach_controller" 00:31:00.709 },{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme4", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:00.709 "hdgst": false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.709 "method": "bdev_nvme_attach_controller" 00:31:00.709 },{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme5", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:00.709 "hdgst": false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.709 "method": "bdev_nvme_attach_controller" 00:31:00.709 },{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme6", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:00.709 "hdgst": false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.709 "method": "bdev_nvme_attach_controller" 00:31:00.709 },{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme7", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:00.709 "hdgst": 
false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.709 "method": "bdev_nvme_attach_controller" 00:31:00.709 },{ 00:31:00.709 "params": { 00:31:00.709 "name": "Nvme8", 00:31:00.709 "trtype": "tcp", 00:31:00.709 "traddr": "10.0.0.2", 00:31:00.709 "adrfam": "ipv4", 00:31:00.709 "trsvcid": "4420", 00:31:00.709 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:00.709 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:00.709 "hdgst": false, 00:31:00.709 "ddgst": false 00:31:00.709 }, 00:31:00.710 "method": "bdev_nvme_attach_controller" 00:31:00.710 },{ 00:31:00.710 "params": { 00:31:00.710 "name": "Nvme9", 00:31:00.710 "trtype": "tcp", 00:31:00.710 "traddr": "10.0.0.2", 00:31:00.710 "adrfam": "ipv4", 00:31:00.710 "trsvcid": "4420", 00:31:00.710 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:00.710 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:00.710 "hdgst": false, 00:31:00.710 "ddgst": false 00:31:00.710 }, 00:31:00.710 "method": "bdev_nvme_attach_controller" 00:31:00.710 },{ 00:31:00.710 "params": { 00:31:00.710 "name": "Nvme10", 00:31:00.710 "trtype": "tcp", 00:31:00.710 "traddr": "10.0.0.2", 00:31:00.710 "adrfam": "ipv4", 00:31:00.710 "trsvcid": "4420", 00:31:00.710 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:00.710 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:00.710 "hdgst": false, 00:31:00.710 "ddgst": false 00:31:00.710 }, 00:31:00.710 "method": "bdev_nvme_attach_controller" 00:31:00.710 }' 00:31:00.710 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.710 [2024-07-22 19:35:19.514643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.970 [2024-07-22 19:35:19.691770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.881 Running I/O for 10 seconds... 00:31:02.881 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:02.881 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:03.143 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:31:03.404 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3050860 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3050860 ']' 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3050860 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3050860 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:03.405 19:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3050860' 00:31:03.405 killing process with pid 3050860 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3050860 00:31:03.405 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3050860 00:31:03.666 Received shutdown signal, test time was about 0.981240 seconds 00:31:03.666 00:31:03.666 Latency(us) 00:31:03.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.666 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme1n1 : 0.94 204.50 12.78 0.00 0.00 308748.52 41506.13 248162.99 00:31:03.666 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme2n1 : 0.98 261.14 16.32 0.00 0.00 235817.60 7809.71 267386.88 00:31:03.666 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme3n1 : 0.97 262.64 16.42 0.00 0.00 230317.23 19005.44 267386.88 00:31:03.666 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme4n1 : 0.97 263.49 16.47 0.00 0.00 224615.89 22500.69 262144.00 00:31:03.666 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme5n1 : 0.95 201.29 12.58 0.00 0.00 286951.82 18459.31 265639.25 00:31:03.666 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme6n1 : 0.95 202.19 12.64 0.00 0.00 278739.06 23374.51 263891.63 00:31:03.666 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme7n1 : 0.94 205.05 12.82 0.00 0.00 267083.95 23920.64 267386.88 00:31:03.666 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme8n1 : 0.93 205.41 12.84 0.00 0.00 260807.68 18896.21 269134.51 00:31:03.666 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme9n1 : 0.96 199.99 12.50 0.00 0.00 262649.74 34297.17 274377.39 00:31:03.666 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.666 Verification LBA range: start 0x0 length 0x400 00:31:03.666 Nvme10n1 : 0.96 198.98 12.44 0.00 0.00 257612.23 21517.65 288358.40 00:31:03.666 =================================================================================================================== 00:31:03.666 Total : 2204.68 137.79 0.00 0.00 258508.54 7809.71 288358.40 00:31:04.237 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3050476 00:31:05.622 19:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:05.622 rmmod nvme_tcp 00:31:05.622 rmmod nvme_fabrics 00:31:05.622 rmmod nvme_keyring 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3050476 ']' 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3050476 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3050476 ']' 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3050476 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3050476 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:05.622 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3050476' 00:31:05.622 killing process with pid 3050476 00:31:05.623 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3050476 00:31:05.623 19:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3050476 00:31:07.006 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:07.006 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:07.007 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:07.007 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:07.007 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:07.007 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.007 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.007 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.920 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:09.182 00:31:09.182 real 0m10.512s 00:31:09.182 user 0m34.168s 00:31:09.182 sys 0m1.464s 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.182 ************************************ 00:31:09.182 END TEST nvmf_shutdown_tc2 00:31:09.182 ************************************ 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:09.182 ************************************ 00:31:09.182 START TEST nvmf_shutdown_tc3 00:31:09.182 ************************************ 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.182 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:09.183 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:09.183 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.183 19:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:09.183 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:09.183 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.183 19:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:09.183 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.183 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:09.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
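[editor's note] For readability, the nvmf_tcp_init sequence traced above for tc3 (identical to the one tc2 ran earlier; its ping replies continue below) boils down to the following commands, lifted directly from the trace with nothing added: the target-side interface cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 in the default namespace, TCP port 4420 is opened, and connectivity is verified in both directions with single pings.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target namespace -> initiator
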
00:31:09.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:31:09.445 00:31:09.445 --- 10.0.0.2 ping statistics --- 00:31:09.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.445 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:09.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:31:09.445 00:31:09.445 --- 10.0.0.1 ping statistics --- 00:31:09.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.445 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3052661 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3052661 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3052661 ']' 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.445 19:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:09.445 [2024-07-22 19:35:28.397268] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:09.445 [2024-07-22 19:35:28.397362] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.706 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.706 [2024-07-22 19:35:28.516234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:09.706 [2024-07-22 19:35:28.654431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.706 [2024-07-22 19:35:28.654469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.706 [2024-07-22 19:35:28.654478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.706 [2024-07-22 19:35:28.654485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.706 [2024-07-22 19:35:28.654492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.706 [2024-07-22 19:35:28.654614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:09.706 [2024-07-22 19:35:28.654756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:09.706 [2024-07-22 19:35:28.654896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.706 [2024-07-22 19:35:28.654923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.279 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:10.279 [2024-07-22 19:35:29.225594] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.540 19:35:29 
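
nvmfappstart then launches the NVMe-oF target inside that namespace, waits for its RPC socket, and shutdown.sh creates the TCP transport. A rough equivalent using SPDK's scripts/rpc.py directly is sketched below; the paths, core mask and transport flags follow the log, while the explicit rpc.py polling is an assumption about what the harness's waitforlisten/rpc_cmd wrappers amount to, not the wrappers themselves:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NETNS=cvl_0_0_ns_spdk

    # Start the target on cores 1-4 (-m 0x1E) inside the test namespace.
    ip netns exec "$NETNS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Wait until the RPC socket answers, then create the TCP transport
    # with the same options the trace shows (-t tcp -o -u 8192).
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
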
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:10.540 
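
The shutdown.sh@27-28 loop above appends one block per subsystem (1..10) to rpcs.txt, which rpc_cmd then replays in a single batch. The heredoc itself is not shown in this excerpt; a hypothetical reconstruction of one iteration, inferred from the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener that appear just below, would look roughly like the following (RPC names are standard SPDK RPCs, but the sizes, serial numbers and exact flags are guesses):

    # Hypothetical sketch of one $i iteration written into rpcs.txt;
    # the real heredoc lives in test/nvmf/target/shutdown.sh.
    i=1
    cat >> rpcs.txt <<EOF
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
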
19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.540 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:10.540 Malloc1 00:31:10.540 [2024-07-22 19:35:29.353812] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.540 Malloc2 00:31:10.540 Malloc3 00:31:10.801 Malloc4 00:31:10.801 Malloc5 00:31:10.801 Malloc6 00:31:10.801 Malloc7 00:31:11.061 Malloc8 00:31:11.061 Malloc9 00:31:11.061 Malloc10 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3053038 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3053038 /var/tmp/bdevperf.sock 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3053038 ']' 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:11.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
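
Next the harness starts bdevperf against those subsystems: gen_nvmf_target_json (expanded in the trace below) emits a JSON config containing one bdev_nvme_attach_controller call per subsystem, and the test feeds it to bdevperf over a process-substitution fd, which is why the command line shows --json /dev/fd/63. A hedged sketch of that invocation, with the workload flags copied from the log and gen_nvmf_target_json assumed to be available from a sourced nvmf/common.sh:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 64-deep queue, 64 KiB I/O, 'verify' workload for 10 seconds,
    # private RPC socket so the test can poll bdevperf's iostat later.
    "$SPDK/build/examples/bdevperf" \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
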
00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:31:11.061 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.062 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.062 { 00:31:11.062 "params": { 00:31:11.062 "name": "Nvme$subsystem", 00:31:11.062 "trtype": "$TEST_TRANSPORT", 00:31:11.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.062 "adrfam": "ipv4", 00:31:11.062 "trsvcid": "$NVMF_PORT", 00:31:11.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.062 "hdgst": ${hdgst:-false}, 00:31:11.062 "ddgst": ${ddgst:-false} 00:31:11.062 }, 00:31:11.062 "method": "bdev_nvme_attach_controller" 00:31:11.062 } 00:31:11.062 EOF 00:31:11.062 )") 00:31:11.062 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.062 19:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.062 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.062 { 00:31:11.062 "params": { 00:31:11.062 "name": "Nvme$subsystem", 00:31:11.062 "trtype": "$TEST_TRANSPORT", 00:31:11.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.062 "adrfam": "ipv4", 00:31:11.062 "trsvcid": "$NVMF_PORT", 00:31:11.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.062 "hdgst": ${hdgst:-false}, 00:31:11.062 "ddgst": ${ddgst:-false} 00:31:11.062 }, 00:31:11.062 "method": "bdev_nvme_attach_controller" 00:31:11.062 } 00:31:11.062 EOF 00:31:11.062 )") 00:31:11.062 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.062 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.062 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.062 { 00:31:11.062 "params": { 00:31:11.062 "name": "Nvme$subsystem", 00:31:11.062 "trtype": "$TEST_TRANSPORT", 00:31:11.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.062 "adrfam": "ipv4", 00:31:11.062 "trsvcid": "$NVMF_PORT", 00:31:11.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.062 "hdgst": ${hdgst:-false}, 00:31:11.062 "ddgst": ${ddgst:-false} 00:31:11.062 }, 00:31:11.062 "method": 
"bdev_nvme_attach_controller" 00:31:11.062 } 00:31:11.062 EOF 00:31:11.062 )") 00:31:11.062 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.323 { 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme$subsystem", 00:31:11.323 "trtype": "$TEST_TRANSPORT", 00:31:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "$NVMF_PORT", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.323 "hdgst": ${hdgst:-false}, 00:31:11.323 "ddgst": ${ddgst:-false} 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 } 00:31:11.323 EOF 00:31:11.323 )") 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.323 { 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme$subsystem", 00:31:11.323 "trtype": "$TEST_TRANSPORT", 00:31:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "$NVMF_PORT", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.323 "hdgst": ${hdgst:-false}, 00:31:11.323 "ddgst": ${ddgst:-false} 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 } 00:31:11.323 EOF 00:31:11.323 )") 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.323 { 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme$subsystem", 00:31:11.323 "trtype": "$TEST_TRANSPORT", 00:31:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "$NVMF_PORT", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.323 "hdgst": ${hdgst:-false}, 00:31:11.323 "ddgst": ${ddgst:-false} 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 } 00:31:11.323 EOF 00:31:11.323 )") 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.323 { 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme$subsystem", 00:31:11.323 "trtype": "$TEST_TRANSPORT", 00:31:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "$NVMF_PORT", 00:31:11.323 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.323 "hdgst": ${hdgst:-false}, 00:31:11.323 "ddgst": ${ddgst:-false} 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 } 00:31:11.323 EOF 00:31:11.323 )") 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.323 { 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme$subsystem", 00:31:11.323 "trtype": "$TEST_TRANSPORT", 00:31:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "$NVMF_PORT", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.323 "hdgst": ${hdgst:-false}, 00:31:11.323 "ddgst": ${ddgst:-false} 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 } 00:31:11.323 EOF 00:31:11.323 )") 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.323 { 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme$subsystem", 00:31:11.323 "trtype": "$TEST_TRANSPORT", 00:31:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "$NVMF_PORT", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.323 "hdgst": ${hdgst:-false}, 00:31:11.323 "ddgst": ${ddgst:-false} 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 } 00:31:11.323 EOF 00:31:11.323 )") 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.323 { 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme$subsystem", 00:31:11.323 "trtype": "$TEST_TRANSPORT", 00:31:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "$NVMF_PORT", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.323 "hdgst": ${hdgst:-false}, 00:31:11.323 "ddgst": ${ddgst:-false} 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 } 00:31:11.323 EOF 00:31:11.323 )") 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:11.323 [2024-07-22 19:35:30.067943] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:11.323 [2024-07-22 19:35:30.068046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053038 ] 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:31:11.323 19:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme1", 00:31:11.323 "trtype": "tcp", 00:31:11.323 "traddr": "10.0.0.2", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "4420", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:11.323 "hdgst": false, 00:31:11.323 "ddgst": false 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 },{ 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme2", 00:31:11.323 "trtype": "tcp", 00:31:11.323 "traddr": "10.0.0.2", 00:31:11.323 "adrfam": "ipv4", 00:31:11.323 "trsvcid": "4420", 00:31:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:11.323 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:11.323 "hdgst": false, 00:31:11.323 "ddgst": false 00:31:11.323 }, 00:31:11.323 "method": "bdev_nvme_attach_controller" 00:31:11.323 },{ 00:31:11.323 "params": { 00:31:11.323 "name": "Nvme3", 00:31:11.323 "trtype": "tcp", 00:31:11.323 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:11.324 "hdgst": false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 },{ 00:31:11.324 "params": { 00:31:11.324 "name": "Nvme4", 00:31:11.324 "trtype": "tcp", 00:31:11.324 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:11.324 "hdgst": false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 },{ 00:31:11.324 "params": { 00:31:11.324 "name": "Nvme5", 00:31:11.324 "trtype": "tcp", 00:31:11.324 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:11.324 "hdgst": false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 },{ 00:31:11.324 "params": { 00:31:11.324 "name": "Nvme6", 00:31:11.324 "trtype": "tcp", 00:31:11.324 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:11.324 "hdgst": false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 },{ 00:31:11.324 "params": { 00:31:11.324 "name": "Nvme7", 00:31:11.324 "trtype": "tcp", 00:31:11.324 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:11.324 "hdgst": 
false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 },{ 00:31:11.324 "params": { 00:31:11.324 "name": "Nvme8", 00:31:11.324 "trtype": "tcp", 00:31:11.324 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:11.324 "hdgst": false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 },{ 00:31:11.324 "params": { 00:31:11.324 "name": "Nvme9", 00:31:11.324 "trtype": "tcp", 00:31:11.324 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:11.324 "hdgst": false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 },{ 00:31:11.324 "params": { 00:31:11.324 "name": "Nvme10", 00:31:11.324 "trtype": "tcp", 00:31:11.324 "traddr": "10.0.0.2", 00:31:11.324 "adrfam": "ipv4", 00:31:11.324 "trsvcid": "4420", 00:31:11.324 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:11.324 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:11.324 "hdgst": false, 00:31:11.324 "ddgst": false 00:31:11.324 }, 00:31:11.324 "method": "bdev_nvme_attach_controller" 00:31:11.324 }' 00:31:11.324 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.324 [2024-07-22 19:35:30.180071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.585 [2024-07-22 19:35:30.357888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.501 Running I/O for 10 seconds... 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # 
(( i = 10 )) 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:13.762 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3052661 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3052661 ']' 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3052661 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3052661 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3052661' 00:31:14.029 killing process with pid 3052661 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3052661 00:31:14.029 19:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3052661 00:31:14.029 [2024-07-22 19:35:32.956816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 
19:35:32.956965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.956996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.029 [2024-07-22 19:35:32.957002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 
19:35:32.957093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 
19:35:32.957234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.957259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.959024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.959057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.960976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.960997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 
19:35:32.961084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 19:35:32.961218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:14.030 [2024-07-22 
19:35:32.961224 - 19:35:32.961396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set (same message repeated continuously over this interval)
00:31:14.031 [2024-07-22 19:35:32.961462 - 19:35:32.962950] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:4 through cid:63 nsid:1 lba:16896 through lba:24448 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.032 [2024-07-22 19:35:32.962963 - 19:35:32.963043] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0 through cid:3 nsid:1 lba:16384 through lba:16768 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.032-00:31:14.033 [2024-07-22 19:35:32.963520 - 19:35:32.963953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set (same message repeated continuously over this interval)
00:31:14.033 [2024-07-22 19:35:32.965355 - 19:35:32.965394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set (message repeated three times)
00:31:14.033 [2024-07-22 19:35:32.965773] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000392e00 was disconnected and freed. reset controller.
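The "(00/08)" pair printed with every aborted completion above is the NVMe Status Code Type and Status Code taken from Dword 3 of the completion queue entry; SCT 0x0 with SC 0x08 is the generic status the log itself names, "ABORTED - SQ DELETION", i.e. the status a target returns for I/O still outstanding when its submission queue is deleted during a controller reset. The standalone sketch below (it does not use SPDK headers, and all names are local to the example) shows how that pair, together with the p/m/dnr bits echoed in the log, unpacks from the 16-bit status word per the NVMe base specification.

/*
 * Minimal, standalone sketch of decoding the completion status fields
 * echoed in the log lines above.  Not SPDK code; field layout follows
 * the NVMe base specification (CQE Dword 3, upper 16 bits).
 */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
    uint8_t p;    /* phase tag                    (bit 0)      */
    uint8_t sc;   /* status code                  (bits 8:1)   */
    uint8_t sct;  /* status code type             (bits 11:9)  */
    uint8_t crd;  /* command retry delay          (bits 13:12) */
    uint8_t m;    /* more info in error log page  (bit 14)     */
    uint8_t dnr;  /* do not retry                 (bit 15)     */
};

static struct cqe_status decode_status(uint16_t dw3_upper)
{
    struct cqe_status s = {
        .p   = dw3_upper & 0x1,
        .sc  = (dw3_upper >> 1) & 0xff,
        .sct = (dw3_upper >> 9) & 0x7,
        .crd = (dw3_upper >> 12) & 0x3,
        .m   = (dw3_upper >> 14) & 0x1,
        .dnr = (dw3_upper >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT=0x0, SC=0x08, p/m/dnr all zero: matches the log's
     * "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0" lines. */
    uint16_t raw = (uint16_t)((0x0 << 9) | (0x08 << 1));
    struct cqe_status s = decode_status(raw);

    printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}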
00:31:14.033 [2024-07-22 19:35:32.965915 - 19:35:32.966019] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389a80 is same with the state(5) to be set
00:31:14.033-00:31:14.035 [2024-07-22 19:35:32.966018 - 19:35:32.966475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set (same message repeated continuously over this interval; in the raw capture it is interleaved mid-line with the nvme_qpair.c / nvme_tcp.c output summarized below)
00:31:14.034 [2024-07-22 19:35:32.966056 - 19:35:32.966165] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3, each completed ABORTED - SQ DELETION (00/08); nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038a480 is same with the state(5) to be set
00:31:14.034 [2024-07-22 19:35:32.966196 - 19:35:32.966302] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3, each completed ABORTED - SQ DELETION (00/08); nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389080 is same with the state(5) to be set
00:31:14.034-00:31:14.035 [2024-07-22 19:35:32.966339 - 19:35:32.966436] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3, each completed ABORTED - SQ DELETION (00/08); nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038e080 is same with the state(5) to be set
00:31:14.035 [2024-07-22 19:35:32.966464 - 19:35:32.966556] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3, each completed ABORTED - SQ DELETION (00/08); nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set
00:31:14.035 [2024-07-22 19:35:32.966584 - 19:35:32.966674] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3, each completed ABORTED - SQ DELETION (00/08); nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038ae80 is same with the state(5) to be set
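The dense runs of "The recv state of tqpair=... is same with the state(5) to be set" in this part of the log are emitted once per redundant state change: the setter is asked to move a queue pair into the receive state it already holds while the connection is being torn down. The sketch below is illustrative only; it is not SPDK's tcp.c or nvme_tcp.c, the enum names are placeholders, and the meaning of "state(5)" is not asserted. It only shows the guard shape that produces one such line per redundant call.

/*
 * Illustrative guard sketch (not SPDK code): logging when a queue pair
 * is asked to enter the receive state it is already in.
 */
#include <stdio.h>

enum recv_state {
    /* Placeholder state names and values; not SPDK's actual enum. */
    RECV_STATE_AWAIT_PDU_READY,
    RECV_STATE_AWAIT_PDU_HEADER,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_EXITING,
    RECV_STATE_ERROR,
};

struct tqpair {
    unsigned long long addr;     /* printed as tqpair=0x...           */
    enum recv_state recv_state;  /* current receive-side state         */
};

static void set_recv_state(struct tqpair *q, enum recv_state new_state)
{
    if (q->recv_state == new_state) {
        /* Redundant transition: log and return, one line per call. */
        fprintf(stderr,
                "The recv state of tqpair=0x%llx is same with the state(%d) to be set\n",
                q->addr, (int)new_state);
        return;
    }
    q->recv_state = new_state;
}

int main(void)
{
    struct tqpair q = { .addr = 0x61800000a480ULL, .recv_state = RECV_STATE_ERROR };

    /* Asking for the state the qpair already holds logs once and returns. */
    set_recv_state(&q, RECV_STATE_ERROR);
    return 0;
}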
00:31:14.035-00:31:14.036 [2024-07-22 19:35:32.967907 - 19:35:32.968330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set (same message repeated continuously over this interval)
00:31:14.036 [2024-07-22 19:35:32.968961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:31:14.036 [2024-07-22 19:35:32.969018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038e080 (9): Bad file descriptor
00:31:14.036 [2024-07-22 19:35:32.969060] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:14.036 [2024-07-22 19:35:32.970929 - 19:35:32.971135] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0 through cid:7 nsid:1 lba:24576 through lba:25472 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.036 [2024-07-22 19:35:32.971146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000391000 is same with the state(5) to be set
00:31:14.036 [2024-07-22 19:35:32.971227 - 19:35:32.971296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set (same message repeated continuously over this interval) 00:31:14.036 [2024-07-22 19:35:32.971302] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971363] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000391000 was disconnected and freed. reset controller. 
00:31:14.036 [2024-07-22 19:35:32.971372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 
00:31:14.036 [2024-07-22 19:35:32.971515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.036 [2024-07-22 19:35:32.971521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:14.037 [2024-07-22 19:35:32.971623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.971980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.971993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.037 [2024-07-22 19:35:32.972320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.037 [2024-07-22 19:35:32.972339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:14.038 [2024-07-22 19:35:32.972350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:14.038 [2024-07-22 19:35:32.972409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:14.038 [2024-07-22 19:35:32.972421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 
19:35:32.972566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972799] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.972976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.972987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.973000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.973011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.973023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.973034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.973051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.973061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.973075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.973086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.038 [2024-07-22 19:35:32.973308] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000391f00 was disconnected and freed. reset controller. 00:31:14.038 [2024-07-22 19:35:32.973657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.038 [2024-07-22 19:35:32.973683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.973977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.973989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974594] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.039 [2024-07-22 19:35:32.974651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.039 [2024-07-22 19:35:32.974663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.040 [2024-07-22 19:35:32.974686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.040 [2024-07-22 19:35:32.974709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.040 [2024-07-22 19:35:32.974733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.040 [2024-07-22 19:35:32.974756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.040 [2024-07-22 19:35:32.974779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.040 [2024-07-22 19:35:32.974802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.040 [2024-07-22 19:35:32.974825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.040 [2024-07-22 19:35:32.974837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.985929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.985972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.985990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.310 [2024-07-22 19:35:32.986361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.986429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:14.310 [2024-07-22 19:35:32.986643] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000392900 was disconnected and freed. reset controller. 
00:31:14.310 [2024-07-22 19:35:32.987650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.310 [2024-07-22 19:35:32.987697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038e080 with addr=10.0.0.2, port=4420 00:31:14.310 [2024-07-22 19:35:32.987716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038e080 is same with the state(5) to be set 00:31:14.310 [2024-07-22 19:35:32.987789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.310 [2024-07-22 19:35:32.987808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.987823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.310 [2024-07-22 19:35:32.987835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.987847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.310 [2024-07-22 19:35:32.987858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.987870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.310 [2024-07-22 19:35:32.987881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.310 [2024-07-22 19:35:32.987891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038d680 is same with the state(5) to be set 00:31:14.310 [2024-07-22 19:35:32.987923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.310 [2024-07-22 19:35:32.987936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.987948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.987959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.987975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.987987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.987999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038cc80 is 
same with the state(5) to be set 00:31:14.311 [2024-07-22 19:35:32.988062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038c280 is same with the state(5) to be set 00:31:14.311 [2024-07-22 19:35:32.988175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389a80 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.988210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038a480 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.988237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389080 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.988280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.311 [2024-07-22 19:35:32.988363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.988377] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038b880 is same with the state(5) to be set 00:31:14.311 [2024-07-22 19:35:32.988400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.988420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038ae80 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.988441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038e080 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.992284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:14.311 [2024-07-22 19:35:32.992322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:14.311 [2024-07-22 19:35:32.992340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038c280 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.992427] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:14.311 [2024-07-22 19:35:32.992478] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:14.311 [2024-07-22 19:35:32.992834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:31:14.311 [2024-07-22 19:35:32.992864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.993155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.311 [2024-07-22 19:35:32.993176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038a480 with addr=10.0.0.2, port=4420 00:31:14.311 [2024-07-22 19:35:32.993187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038a480 is same with the state(5) to be set 00:31:14.311 [2024-07-22 19:35:32.993223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:14.311 [2024-07-22 19:35:32.993236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:14.311 [2024-07-22 19:35:32.993249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:14.311 [2024-07-22 19:35:32.993767] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:14.311 [2024-07-22 19:35:32.993822] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:14.311 [2024-07-22 19:35:32.994722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.311 [2024-07-22 19:35:32.995163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.311 [2024-07-22 19:35:32.995184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038c280 with addr=10.0.0.2, port=4420 00:31:14.311 [2024-07-22 19:35:32.995195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038c280 is same with the state(5) to be set 00:31:14.311 [2024-07-22 19:35:32.995226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038a480 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.995350] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:14.311 [2024-07-22 19:35:32.995732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.311 [2024-07-22 19:35:32.995750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038d680 with addr=10.0.0.2, port=4420 00:31:14.311 [2024-07-22 19:35:32.995761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038d680 is same with the state(5) to be set 00:31:14.311 [2024-07-22 19:35:32.995775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038c280 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.995787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:14.311 [2024-07-22 19:35:32.995796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:14.311 [2024-07-22 19:35:32.995809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:14.311 [2024-07-22 19:35:32.995893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.311 [2024-07-22 19:35:32.995909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.995921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:14.311 [2024-07-22 19:35:32.995930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:14.311 [2024-07-22 19:35:32.995941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:14.311 [2024-07-22 19:35:32.995991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.311 [2024-07-22 19:35:32.996001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:31:14.311 [2024-07-22 19:35:32.996011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:31:14.311 [2024-07-22 19:35:32.996020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:31:14.311 [2024-07-22 19:35:32.996070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.311 [2024-07-22 19:35:32.997168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038cc80 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.997218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038b880 (9): Bad file descriptor 00:31:14.311 [2024-07-22 19:35:32.997346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.311 [2024-07-22 19:35:32.997364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.997385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.311 [2024-07-22 19:35:32.997396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.997410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.311 [2024-07-22 19:35:32.997422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.997435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.311 [2024-07-22 19:35:32.997446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.997460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.311 [2024-07-22 19:35:32.997471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.311 [2024-07-22 19:35:32.997485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.311 [2024-07-22 19:35:32.997497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997586] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.997978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.997989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.312 [2024-07-22 19:35:32.998432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.312 [2024-07-22 19:35:32.998446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:32.998933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:32.998944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000390100 is same with the state(5) to be set 00:31:14.313 [2024-07-22 19:35:33.000456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.313 [2024-07-22 19:35:33.000856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.313 [2024-07-22 19:35:33.000867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.000880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.000891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.000905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.000916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.000929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.000940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.000952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.000963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.000976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.000987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.314 [2024-07-22 19:35:33.001301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.314 [2024-07-22 19:35:33.001312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated records condensed: READ sqid:1 cid:35..63 nsid:1 lba:20864..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; timestamps 19:35:33.001326 through 19:35:33.002016]
00:31:14.315 [2024-07-22 19:35:33.002027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000390600 is same with the state(5) to be set
[repeated records condensed: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; timestamps 19:35:33.003525 through 19:35:33.005072]
00:31:14.317 [2024-07-22 19:35:33.005083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000390b00 is same with the state(5) to be set
[repeated records condensed: READ sqid:1 cid:6..63 nsid:1 lba:17152..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; timestamps 19:35:33.006578 through 19:35:33.007996]
00:31:14.318 [2024-07-22 19:35:33.008008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000391500 is same with the state(5) to be set
00:31:14.318 [2024-07-22 19:35:33.009469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:31:14.318 [2024-07-22 19:35:33.009491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.318 [2024-07-22 19:35:33.009510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:31:14.318 [2024-07-22 19:35:33.009523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:31:14.318 [2024-07-22 19:35:33.009635] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:14.318 [2024-07-22 19:35:33.009725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:31:14.318 [2024-07-22 19:35:33.010133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.318 [2024-07-22 19:35:33.010152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038e080 with addr=10.0.0.2, port=4420
00:31:14.318 [2024-07-22 19:35:33.010164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038e080 is same with the state(5) to be set
00:31:14.318 [2024-07-22 19:35:33.010533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.318 [2024-07-22 19:35:33.010581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420
00:31:14.318 [2024-07-22 19:35:33.010596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set
00:31:14.318 [2024-07-22 19:35:33.011018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.318 [2024-07-22 19:35:33.011036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389080 with addr=10.0.0.2, port=4420
00:31:14.318 [2024-07-22 19:35:33.011051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389080 is same with the state(5) to be set
00:31:14.318 [2024-07-22 19:35:33.011565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.318 [2024-07-22 19:35:33.011612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389a80 with addr=10.0.0.2, port=4420
00:31:14.318 [2024-07-22 19:35:33.011628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389a80 is same with the state(5) to be set
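For reference when reading the two recurring error shapes above, here is a minimal C sketch (an editorial illustration, not part of the SPDK test or its output) that maps the "(00/08)" pair printed by spdk_nvme_print_completion and the "errno = 111" reported by posix_sock_create to their conventional meanings, assuming the standard NVMe status encoding (status code type 0x0 = Generic Command Status, status code 0x08 = Command Aborted due to SQ Deletion) and Linux errno numbering (111 = ECONNREFUSED):

/* decode_log_codes.c: illustrative only; the constants below are copied from
 * the log lines above, not read from a live controller or socket. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* "(00/08)" as printed in the completion lines: status code type / status code */
    unsigned int sct = 0x00; /* Generic Command Status */
    unsigned int sc  = 0x08; /* Command Aborted due to SQ Deletion */

    if (sct == 0x00 && sc == 0x08)
        printf("(%02x/%02x) -> ABORTED - SQ DELETION\n", sct, sc);

    /* "connect() failed, errno = 111": on Linux this value is ECONNREFUSED */
    printf("errno 111 -> %s\n", strerror(ECONNREFUSED));
    return 0;
}

Read this way, the log above appears consistent with in-flight reads being aborted while the controllers are being reset and the target briefly refusing new TCP connections on 10.0.0.2:4420 during that window.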
[repeated records condensed: READ sqid:1 cid:0..49 nsid:1 lba:16384..22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; timestamps 19:35:33.013346 through 19:35:33.014608]
00:31:14.320 [2024-07-22 19:35:33.014622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.014953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.014964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000391a00 is same with the state(5) to be set 00:31:14.320 [2024-07-22 19:35:33.016490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.320 [2024-07-22 19:35:33.016800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.320 [2024-07-22 19:35:33.016811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.016836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.016860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.016883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.016908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.016932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.016956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.016981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.016995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.321 [2024-07-22 19:35:33.017756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.321 [2024-07-22 19:35:33.017768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.322 [2024-07-22 19:35:33.017780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.322 [2024-07-22 19:35:33.017793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.322 [2024-07-22 19:35:33.017804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.322 [2024-07-22 19:35:33.017818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.322 [2024-07-22 19:35:33.017828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.322 [2024-07-22 19:35:33.017842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.322 [2024-07-22 19:35:33.017853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.322 [2024-07-22 19:35:33.017867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.322 [2024-07-22 19:35:33.017879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.322 [2024-07-22 19:35:33.017891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.322 [2024-07-22 19:35:33.017903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.322 [2024-07-22 19:35:33.017915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000392400 is same with the state(5) to be set 00:31:14.322 [2024-07-22 19:35:33.021722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:14.322 [2024-07-22 19:35:33.021759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:14.322 [2024-07-22 19:35:33.021771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:31:14.322 [2024-07-22 19:35:33.021786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:31:14.322 task offset: 16896 on job bdev=Nvme10n1 fails 
00:31:14.322 
00:31:14.322 Latency(us) 
00:31:14.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:31:14.322 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme1n1 ended in about 0.89 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme1n1 : 0.89 144.32 9.02 72.16 0.00 291855.36 28180.48 298844.16 
00:31:14.322 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme2n1 ended in about 0.89 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme2n1 : 0.89 143.82 8.99 71.91 0.00 286084.84 19442.35 269134.51 
00:31:14.322 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme3n1 ended in about 0.89 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme3n1 : 0.89 143.33 8.96 71.67 0.00 280223.57 23811.41 270882.13 
00:31:14.322 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme4n1 ended in about 0.88 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme4n1 : 0.88 219.11 13.69 9.13 0.00 251793.68 22063.79 269134.51 
00:31:14.322 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme5n1 ended in about 0.90 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme5n1 : 0.90 149.57 9.35 64.74 0.00 267044.69 26105.17 267386.88 
00:31:14.322 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme6n1 ended in about 0.90 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme6n1 : 0.90 141.77 8.86 70.88 0.00 263012.98 19988.48 272629.76 
00:31:14.322 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme7n1 ended in about 0.88 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme7n1 : 0.88 145.84 9.12 72.92 0.00 247729.49 19660.80 256901.12 
00:31:14.322 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme8n1 ended in about 0.91 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme8n1 : 0.91 149.04 9.32 62.93 0.00 249415.40 22063.79 270882.13 
00:31:14.322 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme9n1 ended in about 0.88 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme9n1 : 0.88 145.63 9.10 72.82 0.00 234691.41 17585.49 274377.39 
00:31:14.322 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:31:14.322 Job: Nvme10n1 ended in about 0.86 seconds with error 
00:31:14.322 Verification LBA range: start 0x0 length 0x400 
00:31:14.322 Nvme10n1 : 0.86 149.59 9.35 74.79 0.00 220226.13 7099.73 298844.16 
00:31:14.322 =================================================================================================================== 
00:31:14.322 Total : 1532.02 95.75 643.95 0.00 259176.99 7099.73 298844.16 
00:31:14.322 [2024-07-22 19:35:33.089156] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:14.322 [2024-07-22 19:35:33.089218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:31:14.322 [2024-07-22 19:35:33.089594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.322 [2024-07-22 19:35:33.089618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038ae80 with addr=10.0.0.2, port=4420 00:31:14.322 [2024-07-22 19:35:33.089633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038ae80 is same with the state(5) to be set 00:31:14.322 [2024-07-22 19:35:33.089657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038e080 (9): Bad file descriptor 00:31:14.322 [2024-07-22 19:35:33.089675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:31:14.322 [2024-07-22 19:35:33.089689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389080 (9): Bad file descriptor 00:31:14.322 [2024-07-22 19:35:33.089702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389a80 (9): Bad file descriptor 00:31:14.322 [2024-07-22 19:35:33.089745] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.322 [2024-07-22 19:35:33.089764] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.322 [2024-07-22 19:35:33.089779] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.322 [2024-07-22 19:35:33.089793] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:14.322 [2024-07-22 19:35:33.089807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038ae80 (9): Bad file descriptor 00:31:14.322 [2024-07-22 19:35:33.090549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.322 [2024-07-22 19:35:33.090599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038a480 with addr=10.0.0.2, port=4420 00:31:14.322 [2024-07-22 19:35:33.090614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038a480 is same with the state(5) to be set 00:31:14.322 [2024-07-22 19:35:33.090874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.322 [2024-07-22 19:35:33.090892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038c280 with addr=10.0.0.2, port=4420 00:31:14.322 [2024-07-22 19:35:33.090903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038c280 is same with the state(5) to be set 00:31:14.322 [2024-07-22 19:35:33.091282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.322 [2024-07-22 19:35:33.091298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038d680 with addr=10.0.0.2, port=4420 00:31:14.322 [2024-07-22 19:35:33.091309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038d680 is same with the state(5) to be set 00:31:14.322 [2024-07-22 19:35:33.091679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.322 [2024-07-22 19:35:33.091695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038b880 with addr=10.0.0.2, port=4420 00:31:14.322 [2024-07-22 19:35:33.091705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038b880 is same with the state(5) to be set 00:31:14.322 [2024-07-22 19:35:33.092111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.322 [2024-07-22 19:35:33.092125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038cc80 with addr=10.0.0.2, port=4420 00:31:14.322 [2024-07-22 19:35:33.092135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038cc80 is same with the state(5) to be set 00:31:14.322 [2024-07-22 19:35:33.092149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:14.322 [2024-07-22 19:35:33.092159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:14.322 [2024-07-22 19:35:33.092172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:14.322 [2024-07-22 19:35:33.092196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:14.322 [2024-07-22 19:35:33.092209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:14.322 [2024-07-22 19:35:33.092224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:14.322 [2024-07-22 19:35:33.092239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:14.322 [2024-07-22 19:35:33.092248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:14.322 [2024-07-22 19:35:33.092258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:14.322 [2024-07-22 19:35:33.092273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:14.323 [2024-07-22 19:35:33.092281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:14.323 [2024-07-22 19:35:33.092291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:31:14.323 [2024-07-22 19:35:33.092327] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.323 [2024-07-22 19:35:33.092343] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.323 [2024-07-22 19:35:33.092356] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.323 [2024-07-22 19:35:33.092371] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.323 [2024-07-22 19:35:33.092384] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.323 [2024-07-22 19:35:33.093325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038a480 (9): Bad file descriptor 00:31:14.323 [2024-07-22 19:35:33.093399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038c280 (9): Bad file descriptor 00:31:14.323 [2024-07-22 19:35:33.093412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (9): Bad file descriptor 00:31:14.323 [2024-07-22 19:35:33.093425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038b880 (9): Bad file descriptor 00:31:14.323 [2024-07-22 19:35:33.093437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038cc80 (9): Bad file descriptor 00:31:14.323 [2024-07-22 19:35:33.093448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:14.323 [2024-07-22 19:35:33.093458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:14.323 [2024-07-22 19:35:33.093468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:31:14.323 [2024-07-22 19:35:33.093555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:14.323 [2024-07-22 19:35:33.093579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:14.323 [2024-07-22 19:35:33.093588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:14.323 [2024-07-22 19:35:33.093603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:14.323 [2024-07-22 19:35:33.093612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:14.323 [2024-07-22 19:35:33.093624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:14.323 [2024-07-22 19:35:33.093638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:31:14.323 [2024-07-22 19:35:33.093647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:31:14.323 [2024-07-22 19:35:33.093655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:31:14.323 [2024-07-22 19:35:33.093669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:31:14.323 [2024-07-22 19:35:33.093677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:31:14.323 [2024-07-22 19:35:33.093687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:31:14.323 [2024-07-22 19:35:33.093700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:31:14.323 [2024-07-22 19:35:33.093709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:31:14.323 [2024-07-22 19:35:33.093718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:31:14.323 [2024-07-22 19:35:33.093773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.323 [2024-07-22 19:35:33.093809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.707 19:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:31:15.707 19:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3053038 00:31:16.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3053038) - No such process 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:16.649 rmmod nvme_tcp 00:31:16.649 rmmod nvme_fabrics 00:31:16.649 rmmod nvme_keyring 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.649 19:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.649 19:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:19.273 00:31:19.273 real 0m9.666s 00:31:19.273 user 0m26.314s 00:31:19.273 sys 0m1.511s 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:19.273 ************************************ 00:31:19.273 END TEST nvmf_shutdown_tc3 00:31:19.273 ************************************ 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:31:19.273 00:31:19.273 real 0m40.756s 00:31:19.273 user 1m49.501s 00:31:19.273 sys 0m10.183s 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:19.273 ************************************ 00:31:19.273 END TEST nvmf_shutdown 00:31:19.273 ************************************ 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:31:19.273 00:31:19.273 real 17m40.175s 00:31:19.273 user 47m16.410s 00:31:19.273 sys 3m56.341s 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.273 19:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:19.273 ************************************ 00:31:19.273 END TEST nvmf_target_extra 00:31:19.273 ************************************ 00:31:19.273 19:35:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:19.273 19:35:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:19.273 19:35:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:19.273 19:35:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.273 19:35:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:19.273 ************************************ 00:31:19.273 START TEST nvmf_host 00:31:19.273 ************************************ 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:19.273 * Looking for test storage... 
00:31:19.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:19.273 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.274 ************************************ 00:31:19.274 START TEST nvmf_multicontroller 00:31:19.274 ************************************ 00:31:19.274 19:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:19.274 * Looking for test storage... 
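Each of these suites is driven by the run_test helper from SPDK's autotest_common.sh. Based only on what it visibly does in this trace (print the asterisk banners around START TEST and END TEST, time the wrapped command, propagate its return code), a simplified stand-in, not the actual helper, looks roughly like:

  # Rough stand-in for the banner/timing behaviour visible in the log; the real
  # run_test in autotest_common.sh also manages xtrace state and failure accounting.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # the real/user/sys lines in the log come from this
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # e.g. run_test_sketch nvmf_multicontroller ./test/nvmf/host/multicontroller.sh --transport=tcp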
00:31:19.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.274 19:35:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:31:19.274 19:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.423 19:35:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:27.423 19:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:27.423 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:27.423 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:27.423 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:27.423 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:27.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:31:27.423 00:31:27.423 --- 10.0.0.2 ping statistics --- 00:31:27.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.423 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:27.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:31:27.423 00:31:27.423 --- 10.0.0.1 ping statistics --- 00:31:27.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.423 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:31:27.423 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3058141 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3058141 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3058141 ']' 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:27.424 19:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 [2024-07-22 19:35:45.454948] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
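Condensed from the nvmf_tcp_init trace above: one port of the ice NIC (cvl_0_0) is moved into a private namespace to act as the target, its peer port (cvl_0_1) stays in the root namespace as the initiator, and the target application is then launched inside that namespace. A consolidated sketch of those steps, with interface names and addresses copied from the log and run as root, is:

  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
  # The target is then started inside the namespace, exactly as traced above:
  #   ip netns exec "$TARGET_NS" "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE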
00:31:27.424 [2024-07-22 19:35:45.455075] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.424 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.424 [2024-07-22 19:35:45.609519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:27.424 [2024-07-22 19:35:45.838315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.424 [2024-07-22 19:35:45.838385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.424 [2024-07-22 19:35:45.838401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.424 [2024-07-22 19:35:45.838411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.424 [2024-07-22 19:35:45.838423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.424 [2024-07-22 19:35:45.838589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.424 [2024-07-22 19:35:45.838751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.424 [2024-07-22 19:35:45.838781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 [2024-07-22 19:35:46.230269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 Malloc0 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.424 
19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 [2024-07-22 19:35:46.335033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.424 [2024-07-22 19:35:46.346971] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.424 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.685 Malloc1 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.685 19:35:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3058485 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3058485 /var/tmp/bdevperf.sock 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3058485 ']' 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:27.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
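The -z flag keeps bdevperf idle until it is configured over /var/tmp/bdevperf.sock, so every rpc_cmd in this test that carries -s /var/tmp/bdevperf.sock is aimed at that socket rather than at the target's default one. Expressed directly through scripts/rpc.py (an assumption about what rpc_cmd resolves to; the flags themselves are copied from the trace that follows), the first attach and one of the duplicates that must fail look like:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"    # bdevperf's private RPC socket
  # First path: creates bdev NVMe0n1 backed by cnode1 on port 4420.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Re-using the controller name with a different hostnqn must be rejected with
  # -114 "A controller named NVMe0 already exists with the specified network path".
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
      -q nqn.2021-09-7.io.spdk:00001 && echo "unexpected success"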
00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:27.685 19:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.629 NVMe0n1 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.629 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.629 1 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.630 request: 00:31:28.630 { 00:31:28.630 "name": "NVMe0", 00:31:28.630 "trtype": "tcp", 00:31:28.630 "traddr": "10.0.0.2", 00:31:28.630 "adrfam": "ipv4", 00:31:28.630 
"trsvcid": "4420", 00:31:28.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.630 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:28.630 "hostaddr": "10.0.0.2", 00:31:28.630 "hostsvcid": "60000", 00:31:28.630 "prchk_reftag": false, 00:31:28.630 "prchk_guard": false, 00:31:28.630 "hdgst": false, 00:31:28.630 "ddgst": false, 00:31:28.630 "method": "bdev_nvme_attach_controller", 00:31:28.630 "req_id": 1 00:31:28.630 } 00:31:28.630 Got JSON-RPC error response 00:31:28.630 response: 00:31:28.630 { 00:31:28.630 "code": -114, 00:31:28.630 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:28.630 } 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.630 request: 00:31:28.630 { 00:31:28.630 "name": "NVMe0", 00:31:28.630 "trtype": "tcp", 00:31:28.630 "traddr": "10.0.0.2", 00:31:28.630 "adrfam": "ipv4", 00:31:28.630 "trsvcid": "4420", 00:31:28.630 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:28.630 "hostaddr": "10.0.0.2", 00:31:28.630 "hostsvcid": "60000", 00:31:28.630 "prchk_reftag": false, 00:31:28.630 "prchk_guard": false, 00:31:28.630 "hdgst": false, 00:31:28.630 "ddgst": false, 00:31:28.630 "method": "bdev_nvme_attach_controller", 00:31:28.630 "req_id": 1 00:31:28.630 } 00:31:28.630 Got JSON-RPC error response 00:31:28.630 response: 00:31:28.630 { 00:31:28.630 "code": -114, 00:31:28.630 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:31:28.630 } 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:28.630 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.891 request: 00:31:28.891 { 00:31:28.891 "name": "NVMe0", 00:31:28.891 "trtype": "tcp", 00:31:28.891 "traddr": "10.0.0.2", 00:31:28.891 "adrfam": "ipv4", 00:31:28.891 "trsvcid": "4420", 00:31:28.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.891 "hostaddr": "10.0.0.2", 00:31:28.891 "hostsvcid": "60000", 00:31:28.891 "prchk_reftag": false, 00:31:28.891 "prchk_guard": false, 00:31:28.891 "hdgst": false, 00:31:28.891 "ddgst": false, 00:31:28.891 "multipath": "disable", 00:31:28.891 "method": "bdev_nvme_attach_controller", 00:31:28.891 "req_id": 1 00:31:28.891 } 00:31:28.891 Got JSON-RPC error response 00:31:28.891 response: 00:31:28.891 { 00:31:28.891 "code": -114, 00:31:28.891 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:31:28.891 } 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:28.891 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.892 request: 00:31:28.892 { 00:31:28.892 "name": "NVMe0", 00:31:28.892 "trtype": "tcp", 00:31:28.892 "traddr": "10.0.0.2", 00:31:28.892 "adrfam": "ipv4", 00:31:28.892 "trsvcid": "4420", 00:31:28.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.892 "hostaddr": "10.0.0.2", 00:31:28.892 "hostsvcid": "60000", 00:31:28.892 "prchk_reftag": false, 00:31:28.892 "prchk_guard": false, 00:31:28.892 "hdgst": false, 00:31:28.892 "ddgst": false, 00:31:28.892 "multipath": "failover", 00:31:28.892 "method": "bdev_nvme_attach_controller", 00:31:28.892 "req_id": 1 00:31:28.892 } 00:31:28.892 Got JSON-RPC error response 00:31:28.892 response: 00:31:28.892 { 00:31:28.892 "code": -114, 00:31:28.892 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:28.892 } 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.892 00:31:28.892 19:35:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.892 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:28.892 19:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:30.279 0 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3058485 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3058485 ']' 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3058485 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.279 19:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3058485 00:31:30.279 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
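Because bdevperf was started with -z, the queued write workload (-q 128 -o 4096 -w write -t 1) only begins once perform_tests is sent over its socket, which is what the bdevperf.py call traced just above does. Stripped of the rpc_cmd plumbing, that sequence amounts to the sketch below; the paths are taken from the log, while backgrounding with & is an assumption.

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # bdevperf sits idle (-z) until told to start; NVMe0 and NVMe1 are attached first.
  "$SPDK_DIR"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # ... wait for /var/tmp/bdevperf.sock, attach the controllers as shown above ...
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # The per-device IOPS/latency table produced by the run lands in host/try.txt,
  # which is cat'ed further down in this log before being removed.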
00:31:30.279 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:30.279 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3058485' 00:31:30.279 killing process with pid 3058485 00:31:30.279 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3058485 00:31:30.279 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3058485 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:31:30.851 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:30.851 [2024-07-22 19:35:46.532761] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:30.851 [2024-07-22 19:35:46.532878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058485 ] 00:31:30.851 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.851 [2024-07-22 19:35:46.648323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.851 [2024-07-22 19:35:46.824857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.851 [2024-07-22 19:35:47.812881] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 5b46e1c6-41bb-4d19-84b0-02822afb49ba already exists 00:31:30.851 [2024-07-22 19:35:47.812927] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:5b46e1c6-41bb-4d19-84b0-02822afb49ba alias for bdev NVMe1n1 00:31:30.851 [2024-07-22 19:35:47.812943] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:30.851 Running I/O for 1 seconds... 00:31:30.851 00:31:30.851 Latency(us) 00:31:30.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.851 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:30.851 NVMe0n1 : 1.00 18427.67 71.98 0.00 0.00 6931.73 4369.07 14745.60 00:31:30.851 =================================================================================================================== 00:31:30.851 Total : 18427.67 71.98 0.00 0.00 6931.73 4369.07 14745.60 00:31:30.851 Received shutdown signal, test time was about 1.000000 seconds 00:31:30.851 00:31:30.851 Latency(us) 00:31:30.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.851 =================================================================================================================== 00:31:30.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:30.851 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:30.851 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:30.851 rmmod nvme_tcp 00:31:30.851 rmmod nvme_fabrics 00:31:31.113 rmmod nvme_keyring 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3058141 ']' 00:31:31.113 19:35:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3058141 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3058141 ']' 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3058141 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3058141 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3058141' 00:31:31.113 killing process with pid 3058141 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3058141 00:31:31.113 19:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3058141 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.056 19:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:33.971 00:31:33.971 real 0m14.809s 00:31:33.971 user 0m19.812s 00:31:33.971 sys 0m6.290s 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:33.971 ************************************ 00:31:33.971 END TEST nvmf_multicontroller 00:31:33.971 ************************************ 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.971 19:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.971 ************************************ 00:31:33.971 START TEST nvmf_aer 00:31:33.971 ************************************ 00:31:33.971 19:35:52 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:34.233 * Looking for test storage... 00:31:34.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:31:34.233 19:35:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.377 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.377 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:31:42.377 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:42.377 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:42.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:42.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:42.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.378 19:35:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:42.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.378 19:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:42.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:31:42.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.762 ms 00:31:42.378 00:31:42.378 --- 10.0.0.2 ping statistics --- 00:31:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.378 rtt min/avg/max/mdev = 0.762/0.762/0.762/0.000 ms 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:42.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:31:42.378 00:31:42.378 --- 10.0.0.1 ping statistics --- 00:31:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.378 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3063279 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3063279 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3063279 ']' 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.378 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:42.379 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.379 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:42.379 19:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 [2024-07-22 19:36:00.295259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
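The two ping exchanges above close out the TCP test-bed bring-up that nvmf/common.sh performs before every host test: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace as the target-facing NIC at 10.0.0.2/24, the second (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1/24, and TCP port 4420 is opened on the initiator side. A condensed, hand-runnable sketch of that sequence follows; it reuses the interface names and addresses detected in this run and is not the harness code itself.

  # condensed from the nvmf_tcp_init trace above (run as root)
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
  modprobe nvme-tcp
  # nvmfappstart then launches the target inside the namespace (SPDK repo root assumed):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Because only the network stack is namespaced, the RPC socket stays at /var/tmp/spdk.sock on the shared filesystem, so the rpc_cmd calls that follow run from the default namespace while the 10.0.0.2 listener is reached through cvl_0_1, as the successful pings confirm.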
00:31:42.379 [2024-07-22 19:36:00.295412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.379 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.379 [2024-07-22 19:36:00.432999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:42.379 [2024-07-22 19:36:00.621794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.379 [2024-07-22 19:36:00.621838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.379 [2024-07-22 19:36:00.621851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.379 [2024-07-22 19:36:00.621861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.379 [2024-07-22 19:36:00.621871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.379 [2024-07-22 19:36:00.622079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.379 [2024-07-22 19:36:00.622166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.379 [2024-07-22 19:36:00.622315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.379 [2024-07-22 19:36:00.622340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 [2024-07-22 19:36:01.079776] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 Malloc0 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 19:36:01 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 [2024-07-22 19:36:01.176401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.379 [ 00:31:42.379 { 00:31:42.379 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:42.379 "subtype": "Discovery", 00:31:42.379 "listen_addresses": [], 00:31:42.379 "allow_any_host": true, 00:31:42.379 "hosts": [] 00:31:42.379 }, 00:31:42.379 { 00:31:42.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.379 "subtype": "NVMe", 00:31:42.379 "listen_addresses": [ 00:31:42.379 { 00:31:42.379 "trtype": "TCP", 00:31:42.379 "adrfam": "IPv4", 00:31:42.379 "traddr": "10.0.0.2", 00:31:42.379 "trsvcid": "4420" 00:31:42.379 } 00:31:42.379 ], 00:31:42.379 "allow_any_host": true, 00:31:42.379 "hosts": [], 00:31:42.379 "serial_number": "SPDK00000000000001", 00:31:42.379 "model_number": "SPDK bdev Controller", 00:31:42.379 "max_namespaces": 2, 00:31:42.379 "min_cntlid": 1, 00:31:42.379 "max_cntlid": 65519, 00:31:42.379 "namespaces": [ 00:31:42.379 { 00:31:42.379 "nsid": 1, 00:31:42.379 "bdev_name": "Malloc0", 00:31:42.379 "name": "Malloc0", 00:31:42.379 "nguid": "CFEC079B90494FA58FD05B01667D3F6E", 00:31:42.379 "uuid": "cfec079b-9049-4fa5-8fd0-5b01667d3f6e" 00:31:42.379 } 00:31:42.379 ] 00:31:42.379 } 00:31:42.379 ] 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3063607 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:42.379 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:42.379 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:31:42.641 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.902 Malloc1 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:42.902 [ 00:31:42.902 { 00:31:42.902 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:42.902 "subtype": "Discovery", 00:31:42.902 "listen_addresses": [], 00:31:42.902 "allow_any_host": true, 00:31:42.902 "hosts": [] 00:31:42.902 }, 00:31:42.902 { 00:31:42.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.902 "subtype": "NVMe", 00:31:42.902 "listen_addresses": [ 00:31:42.902 { 00:31:42.902 "trtype": "TCP", 00:31:42.902 "adrfam": "IPv4", 00:31:42.902 "traddr": "10.0.0.2", 00:31:42.902 "trsvcid": "4420" 00:31:42.902 } 00:31:42.902 ], 00:31:42.902 "allow_any_host": true, 00:31:42.902 "hosts": [], 00:31:42.902 "serial_number": "SPDK00000000000001", 00:31:42.902 "model_number": "SPDK bdev Controller", 00:31:42.902 "max_namespaces": 2, 00:31:42.902 "min_cntlid": 1, 00:31:42.902 "max_cntlid": 65519, 00:31:42.902 "namespaces": [ 00:31:42.902 { 00:31:42.902 "nsid": 1, 00:31:42.902 "bdev_name": "Malloc0", 00:31:42.902 "name": "Malloc0", 00:31:42.902 "nguid": "CFEC079B90494FA58FD05B01667D3F6E", 00:31:42.902 "uuid": "cfec079b-9049-4fa5-8fd0-5b01667d3f6e" 00:31:42.902 }, 00:31:42.902 { 00:31:42.902 "nsid": 2, 00:31:42.902 "bdev_name": "Malloc1", 00:31:42.902 "name": "Malloc1", 00:31:42.902 "nguid": "60EADAB5AC774255987E15B902EB056E", 00:31:42.902 "uuid": "60eadab5-ac77-4255-987e-15b902eb056e" 00:31:42.902 } 00:31:42.902 ] 00:31:42.902 } 00:31:42.902 ] 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3063607 00:31:42.902 Asynchronous Event Request test 00:31:42.902 Attaching to 10.0.0.2 00:31:42.902 Attached to 10.0.0.2 00:31:42.902 Registering asynchronous event callbacks... 00:31:42.902 Starting namespace attribute notice tests for all controllers... 00:31:42.902 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:42.902 aer_cb - Changed Namespace 00:31:42.902 Cleaning up... 
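The aer tool output just above ("aer_cb for log page 4", "aer_cb - Changed Namespace") is the pass condition for this test: hot-adding Malloc1 as namespace 2 while the tool is attached to cnode1 must raise a namespace-attribute-changed asynchronous event. A sketch of the same flow driven by hand with scripts/rpc.py against an already-running nvmf_tgt is shown below; it reuses the arguments traced through rpc_cmd above, paths are relative to the SPDK repository root, and it is illustrative rather than the test script itself.

  # 1) target-side configuration, as issued by host/aer.sh above
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 2) start the AER listener in the background; the harness polls for the touch
  #    file before it changes any namespaces, as the sleep 0.1 loop above shows
  rm -f /tmp/aer_touch_file
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

  # 3) hot-add a second namespace to fire the AEN, then wait for the tool to exit
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait "$aerpid"

The nvmf_get_subsystems dump above shows the resulting state: cnode1 carries Malloc0 as nsid 1 and Malloc1 as nsid 2, each with the NGUID/UUID pair reported to the host.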
00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.902 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:43.164 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.164 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:43.164 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.164 19:36:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:43.164 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:43.164 rmmod nvme_tcp 00:31:43.164 rmmod nvme_fabrics 00:31:43.164 rmmod nvme_keyring 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3063279 ']' 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3063279 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3063279 ']' 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3063279 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3063279 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3063279' 00:31:43.425 killing process with pid 3063279 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # 
kill 3063279 00:31:43.425 19:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3063279 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.367 19:36:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.279 19:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:46.279 00:31:46.279 real 0m12.296s 00:31:46.279 user 0m10.813s 00:31:46.279 sys 0m6.038s 00:31:46.279 19:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:46.279 19:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:46.279 ************************************ 00:31:46.279 END TEST nvmf_aer 00:31:46.279 ************************************ 00:31:46.279 19:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:46.280 19:36:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:46.280 19:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:46.280 19:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.280 19:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.280 ************************************ 00:31:46.280 START TEST nvmf_async_init 00:31:46.280 ************************************ 00:31:46.280 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:46.540 * Looking for test storage... 
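The nvmf_async_init run starting above will shortly repeat the same bring-up; before it, the trace shows the teardown every host test performs on exit: RPC objects are deleted first, the kernel initiator modules are unloaded, the nvmf_tgt process is killed and reaped, and the namespace plumbing is flushed. Roughly, with the process id and names from this run:

  scripts/rpc.py bdev_malloc_delete Malloc0        # drop the namespaces' backing bdevs
  scripts/rpc.py bdev_malloc_delete Malloc1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp                          # the rmmod lines above show this also pulls nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 3063279 && wait 3063279                     # the nvmf_tgt started for this test (killprocess/wait above)
  # _remove_spdk_ns runs with its trace suppressed; it is assumed to tear down cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1                         # nvmf/common.sh@279

This is a sketch of the traced commands, not the nvmftestfini implementation; in the harness the kill targets $nvmfpid, and wait only succeeds because the target is a child of the test shell.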
00:31:46.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.540 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:46.541 19:36:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3482e4c4340449e694273e1956092cfa 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:31:46.541 19:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:54.686 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:54.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:54.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:54.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:54.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:54.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:31:54.687 00:31:54.687 --- 10.0.0.2 ping statistics --- 00:31:54.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.687 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:31:54.687 00:31:54.687 --- 10.0.0.1 ping statistics --- 00:31:54.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.687 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3068441 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3068441 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3068441 ']' 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.687 19:36:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.687 [2024-07-22 19:36:12.616563] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
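The nvmf_tcp_init sequence traced above reduces to a short shell sketch: move the target-side port into a private network namespace, address both ends of the link, open the firewall for the NVMe/TCP port, and ping in both directions before the target starts. The interface names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.0/24 addresses are the values from this particular run, not fixed defaults.

# target port lives in its own namespace; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator address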
00:31:54.687 [2024-07-22 19:36:12.616691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.687 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.687 [2024-07-22 19:36:12.749285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.687 [2024-07-22 19:36:12.933185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.687 [2024-07-22 19:36:12.933230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.687 [2024-07-22 19:36:12.933245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.687 [2024-07-22 19:36:12.933254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.687 [2024-07-22 19:36:12.933264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.687 [2024-07-22 19:36:12.933289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.687 [2024-07-22 19:36:13.394493] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.687 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 null0 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:54.688 19:36:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3482e4c4340449e694273e1956092cfa 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 [2024-07-22 19:36:13.454765] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.688 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.948 nvme0n1 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.948 [ 00:31:54.948 { 00:31:54.948 "name": "nvme0n1", 00:31:54.948 "aliases": [ 00:31:54.948 "3482e4c4-3404-49e6-9427-3e1956092cfa" 00:31:54.948 ], 00:31:54.948 "product_name": "NVMe disk", 00:31:54.948 "block_size": 512, 00:31:54.948 "num_blocks": 2097152, 00:31:54.948 "uuid": "3482e4c4-3404-49e6-9427-3e1956092cfa", 00:31:54.948 "assigned_rate_limits": { 00:31:54.948 "rw_ios_per_sec": 0, 00:31:54.948 "rw_mbytes_per_sec": 0, 00:31:54.948 "r_mbytes_per_sec": 0, 00:31:54.948 "w_mbytes_per_sec": 0 00:31:54.948 }, 00:31:54.948 "claimed": false, 00:31:54.948 "zoned": false, 00:31:54.948 "supported_io_types": { 00:31:54.948 "read": true, 00:31:54.948 "write": true, 00:31:54.948 "unmap": false, 00:31:54.948 "flush": true, 00:31:54.948 "reset": true, 00:31:54.948 "nvme_admin": true, 00:31:54.948 "nvme_io": true, 00:31:54.948 "nvme_io_md": false, 00:31:54.948 "write_zeroes": true, 00:31:54.948 "zcopy": false, 00:31:54.948 "get_zone_info": false, 00:31:54.948 "zone_management": false, 00:31:54.948 "zone_append": false, 00:31:54.948 "compare": true, 00:31:54.948 "compare_and_write": true, 00:31:54.948 "abort": true, 00:31:54.948 "seek_hole": false, 00:31:54.948 "seek_data": false, 00:31:54.948 "copy": true, 00:31:54.948 "nvme_iov_md": 
false 00:31:54.948 }, 00:31:54.948 "memory_domains": [ 00:31:54.948 { 00:31:54.948 "dma_device_id": "system", 00:31:54.948 "dma_device_type": 1 00:31:54.948 } 00:31:54.948 ], 00:31:54.948 "driver_specific": { 00:31:54.948 "nvme": [ 00:31:54.948 { 00:31:54.948 "trid": { 00:31:54.948 "trtype": "TCP", 00:31:54.948 "adrfam": "IPv4", 00:31:54.948 "traddr": "10.0.0.2", 00:31:54.948 "trsvcid": "4420", 00:31:54.948 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:54.948 }, 00:31:54.948 "ctrlr_data": { 00:31:54.948 "cntlid": 1, 00:31:54.948 "vendor_id": "0x8086", 00:31:54.948 "model_number": "SPDK bdev Controller", 00:31:54.948 "serial_number": "00000000000000000000", 00:31:54.948 "firmware_revision": "24.09", 00:31:54.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.948 "oacs": { 00:31:54.948 "security": 0, 00:31:54.948 "format": 0, 00:31:54.948 "firmware": 0, 00:31:54.948 "ns_manage": 0 00:31:54.948 }, 00:31:54.948 "multi_ctrlr": true, 00:31:54.948 "ana_reporting": false 00:31:54.948 }, 00:31:54.948 "vs": { 00:31:54.948 "nvme_version": "1.3" 00:31:54.948 }, 00:31:54.948 "ns_data": { 00:31:54.948 "id": 1, 00:31:54.948 "can_share": true 00:31:54.948 } 00:31:54.948 } 00:31:54.948 ], 00:31:54.948 "mp_policy": "active_passive" 00:31:54.948 } 00:31:54.948 } 00:31:54.948 ] 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.948 [2024-07-22 19:36:13.731356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:54.948 [2024-07-22 19:36:13.731443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388900 (9): Bad file descriptor 00:31:54.948 [2024-07-22 19:36:13.863344] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
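The async_init provisioning traced above maps onto plain rpc.py calls against the target's RPC socket (/var/tmp/spdk.sock by default); rpc_cmd in the test forwards to that script, and the UUID below is the one chosen for this run. A condensed sketch:

# TCP transport plus a 1024 MiB null bdev with 512-byte blocks (2097152 blocks, as in the dump above)
rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512
rpc.py bdev_wait_for_examine
# expose the bdev as a namespace of cnode0 and listen on the target-side address
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3482e4c4340449e694273e1956092cfa
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# attach an NVMe-oF initiator bdev to that listener and inspect the resulting nvme0n1
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc.py bdev_get_bdevs -b nvme0n1

The bdev_get_bdevs dumps above and below show the namespace UUID surfacing as the bdev alias, and bdev_nvme_reset_controller nvme0 reconnects the controller with cntlid advancing from 1 to 2.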
00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.948 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:54.948 [ 00:31:54.948 { 00:31:54.948 "name": "nvme0n1", 00:31:54.948 "aliases": [ 00:31:54.949 "3482e4c4-3404-49e6-9427-3e1956092cfa" 00:31:54.949 ], 00:31:54.949 "product_name": "NVMe disk", 00:31:54.949 "block_size": 512, 00:31:54.949 "num_blocks": 2097152, 00:31:54.949 "uuid": "3482e4c4-3404-49e6-9427-3e1956092cfa", 00:31:54.949 "assigned_rate_limits": { 00:31:54.949 "rw_ios_per_sec": 0, 00:31:54.949 "rw_mbytes_per_sec": 0, 00:31:54.949 "r_mbytes_per_sec": 0, 00:31:54.949 "w_mbytes_per_sec": 0 00:31:54.949 }, 00:31:54.949 "claimed": false, 00:31:54.949 "zoned": false, 00:31:54.949 "supported_io_types": { 00:31:54.949 "read": true, 00:31:54.949 "write": true, 00:31:54.949 "unmap": false, 00:31:54.949 "flush": true, 00:31:54.949 "reset": true, 00:31:54.949 "nvme_admin": true, 00:31:54.949 "nvme_io": true, 00:31:54.949 "nvme_io_md": false, 00:31:54.949 "write_zeroes": true, 00:31:54.949 "zcopy": false, 00:31:54.949 "get_zone_info": false, 00:31:54.949 "zone_management": false, 00:31:54.949 "zone_append": false, 00:31:54.949 "compare": true, 00:31:54.949 "compare_and_write": true, 00:31:54.949 "abort": true, 00:31:54.949 "seek_hole": false, 00:31:54.949 "seek_data": false, 00:31:54.949 "copy": true, 00:31:54.949 "nvme_iov_md": false 00:31:54.949 }, 00:31:54.949 "memory_domains": [ 00:31:54.949 { 00:31:54.949 "dma_device_id": "system", 00:31:54.949 "dma_device_type": 1 00:31:54.949 } 00:31:54.949 ], 00:31:54.949 "driver_specific": { 00:31:54.949 "nvme": [ 00:31:54.949 { 00:31:54.949 "trid": { 00:31:54.949 "trtype": "TCP", 00:31:54.949 "adrfam": "IPv4", 00:31:54.949 "traddr": "10.0.0.2", 00:31:54.949 "trsvcid": "4420", 00:31:54.949 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:54.949 }, 00:31:54.949 "ctrlr_data": { 00:31:54.949 "cntlid": 2, 00:31:54.949 "vendor_id": "0x8086", 00:31:54.949 "model_number": "SPDK bdev Controller", 00:31:54.949 "serial_number": "00000000000000000000", 00:31:54.949 "firmware_revision": "24.09", 00:31:54.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.949 "oacs": { 00:31:54.949 "security": 0, 00:31:54.949 "format": 0, 00:31:54.949 "firmware": 0, 00:31:54.949 "ns_manage": 0 00:31:54.949 }, 00:31:54.949 "multi_ctrlr": true, 00:31:54.949 "ana_reporting": false 00:31:54.949 }, 00:31:54.949 "vs": { 00:31:54.949 "nvme_version": "1.3" 00:31:54.949 }, 00:31:54.949 "ns_data": { 00:31:54.949 "id": 1, 00:31:54.949 "can_share": true 00:31:54.949 } 00:31:54.949 } 00:31:54.949 ], 00:31:54.949 "mp_policy": "active_passive" 00:31:54.949 } 00:31:54.949 } 00:31:54.949 ] 00:31:54.949 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.949 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.949 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.949 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:55.209 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.209 19:36:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ytH8X4DmNG 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ytH8X4DmNG 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:55.210 [2024-07-22 19:36:13.936039] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:55.210 [2024-07-22 19:36:13.936185] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ytH8X4DmNG 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:55.210 [2024-07-22 19:36:13.948055] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ytH8X4DmNG 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.210 19:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:55.210 [2024-07-22 19:36:13.960110] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:55.210 [2024-07-22 19:36:13.960187] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:55.210 nvme0n1 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
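The TLS leg above follows the same pattern: the PSK interchange key is written to a mode-0600 file and handed over by path, a mechanism the trace itself flags as deprecated and scheduled for removal in v24.09. A condensed sketch, with /tmp/psk.key standing in for the mktemp result:

KEY=/tmp/psk.key
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
chmod 0600 "$KEY"
# restrict the subsystem to explicitly allowed hosts, then open a TLS listener on port 4421
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
# the initiator must present the matching host NQN and the same PSK
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

After this attach, the bdev dump below shows the same namespace reached over trsvcid 4421 with cntlid 3.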
00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:55.210 [ 00:31:55.210 { 00:31:55.210 "name": "nvme0n1", 00:31:55.210 "aliases": [ 00:31:55.210 "3482e4c4-3404-49e6-9427-3e1956092cfa" 00:31:55.210 ], 00:31:55.210 "product_name": "NVMe disk", 00:31:55.210 "block_size": 512, 00:31:55.210 "num_blocks": 2097152, 00:31:55.210 "uuid": "3482e4c4-3404-49e6-9427-3e1956092cfa", 00:31:55.210 "assigned_rate_limits": { 00:31:55.210 "rw_ios_per_sec": 0, 00:31:55.210 "rw_mbytes_per_sec": 0, 00:31:55.210 "r_mbytes_per_sec": 0, 00:31:55.210 "w_mbytes_per_sec": 0 00:31:55.210 }, 00:31:55.210 "claimed": false, 00:31:55.210 "zoned": false, 00:31:55.210 "supported_io_types": { 00:31:55.210 "read": true, 00:31:55.210 "write": true, 00:31:55.210 "unmap": false, 00:31:55.210 "flush": true, 00:31:55.210 "reset": true, 00:31:55.210 "nvme_admin": true, 00:31:55.210 "nvme_io": true, 00:31:55.210 "nvme_io_md": false, 00:31:55.210 "write_zeroes": true, 00:31:55.210 "zcopy": false, 00:31:55.210 "get_zone_info": false, 00:31:55.210 "zone_management": false, 00:31:55.210 "zone_append": false, 00:31:55.210 "compare": true, 00:31:55.210 "compare_and_write": true, 00:31:55.210 "abort": true, 00:31:55.210 "seek_hole": false, 00:31:55.210 "seek_data": false, 00:31:55.210 "copy": true, 00:31:55.210 "nvme_iov_md": false 00:31:55.210 }, 00:31:55.210 "memory_domains": [ 00:31:55.210 { 00:31:55.210 "dma_device_id": "system", 00:31:55.210 "dma_device_type": 1 00:31:55.210 } 00:31:55.210 ], 00:31:55.210 "driver_specific": { 00:31:55.210 "nvme": [ 00:31:55.210 { 00:31:55.210 "trid": { 00:31:55.210 "trtype": "TCP", 00:31:55.210 "adrfam": "IPv4", 00:31:55.210 "traddr": "10.0.0.2", 00:31:55.210 "trsvcid": "4421", 00:31:55.210 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:55.210 }, 00:31:55.210 "ctrlr_data": { 00:31:55.210 "cntlid": 3, 00:31:55.210 "vendor_id": "0x8086", 00:31:55.210 "model_number": "SPDK bdev Controller", 00:31:55.210 "serial_number": "00000000000000000000", 00:31:55.210 "firmware_revision": "24.09", 00:31:55.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.210 "oacs": { 00:31:55.210 "security": 0, 00:31:55.210 "format": 0, 00:31:55.210 "firmware": 0, 00:31:55.210 "ns_manage": 0 00:31:55.210 }, 00:31:55.210 "multi_ctrlr": true, 00:31:55.210 "ana_reporting": false 00:31:55.210 }, 00:31:55.210 "vs": { 00:31:55.210 "nvme_version": "1.3" 00:31:55.210 }, 00:31:55.210 "ns_data": { 00:31:55.210 "id": 1, 00:31:55.210 "can_share": true 00:31:55.210 } 00:31:55.210 } 00:31:55.210 ], 00:31:55.210 "mp_policy": "active_passive" 00:31:55.210 } 00:31:55.210 } 00:31:55.210 ] 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ytH8X4DmNG 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:31:55.210 19:36:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:55.210 rmmod nvme_tcp 00:31:55.210 rmmod nvme_fabrics 00:31:55.210 rmmod nvme_keyring 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3068441 ']' 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3068441 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3068441 ']' 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3068441 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:55.210 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068441 00:31:55.477 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:55.477 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:55.477 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068441' 00:31:55.477 killing process with pid 3068441 00:31:55.477 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3068441 00:31:55.477 [2024-07-22 19:36:14.212084] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:55.477 [2024-07-22 19:36:14.212121] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:55.477 19:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3068441 00:31:56.482 19:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:56.482 19:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:56.482 19:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:56.482 19:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:56.482 19:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:56.483 19:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.483 19:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.483 19:36:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:58.397 00:31:58.397 real 0m11.930s 00:31:58.397 user 0m4.538s 00:31:58.397 sys 0m5.857s 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:58.397 ************************************ 00:31:58.397 END TEST nvmf_async_init 00:31:58.397 ************************************ 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.397 ************************************ 00:31:58.397 START TEST dma 00:31:58.397 ************************************ 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:58.397 * Looking for test storage... 00:31:58.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.397 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:58.659 00:31:58.659 real 0m0.130s 00:31:58.659 user 0m0.063s 00:31:58.659 sys 0m0.074s 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:58.659 ************************************ 00:31:58.659 END TEST dma 00:31:58.659 ************************************ 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.659 ************************************ 00:31:58.659 START TEST nvmf_identify 00:31:58.659 ************************************ 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:58.659 * Looking for test storage... 
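The dma suite that just finished is effectively a no-op on this transport: host/dma.sh bails out before doing any work when the transport is not rdma, which is why it completes in a fraction of a second. As expanded in the trace (the script tests its transport variable; only the expanded value, tcp, is visible here):

# host/dma.sh lines 12-13 as traced: DMA/memory-domain tests only apply to RDMA
[ tcp != rdma ] && exit 0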
00:31:58.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.659 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.660 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.660 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:58.660 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:58.660 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:31:58.660 19:36:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:06.804 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:06.805 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:06.805 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.805 19:36:24 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:06.805 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:06.805 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:06.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:32:06.805 00:32:06.805 --- 10.0.0.2 ping statistics --- 00:32:06.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.805 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:32:06.805 00:32:06.805 --- 10.0.0.1 ping statistics --- 00:32:06.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.805 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3073160 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3073160 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3073160 ']' 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.805 19:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:06.805 [2024-07-22 19:36:24.787342] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:06.805 [2024-07-22 19:36:24.787447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.805 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.805 [2024-07-22 19:36:24.919620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.805 [2024-07-22 19:36:25.103181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.805 [2024-07-22 19:36:25.103230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.805 [2024-07-22 19:36:25.103243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.805 [2024-07-22 19:36:25.103252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.805 [2024-07-22 19:36:25.103263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
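The trace above is nvmf/common.sh's nvmf_tcp_init step: it takes the two cvl_0_* netdevs found under the target NIC, isolates the target-side port in its own network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, verifies reachability in both directions, and only then launches nvmf_tgt inside that namespace. A minimal standalone sketch of the same plumbing (interface names, addresses and the namespace name are copied from the log; run with root privileges) looks roughly like this:

  # target port goes into its own namespace; initiator port stays in the default namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side gets 10.0.0.1, target side (inside the namespace) gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in on the default port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target itself then runs inside the namespace, as in the trace above:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

All of these commands appear in the trace itself; only the relative nvmf_tgt path is shortened here, and the initial address flushes on both interfaces are omitted for brevity.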
00:32:06.805 [2024-07-22 19:36:25.103439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.805 [2024-07-22 19:36:25.103521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.805 [2024-07-22 19:36:25.103635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.805 [2024-07-22 19:36:25.103661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.805 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:06.805 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:32:06.805 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.805 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.805 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.805 [2024-07-22 19:36:25.533744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 Malloc0 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 [2024-07-22 19:36:25.670680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 [ 00:32:06.806 { 00:32:06.806 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:06.806 "subtype": "Discovery", 00:32:06.806 "listen_addresses": [ 00:32:06.806 { 00:32:06.806 "trtype": "TCP", 00:32:06.806 "adrfam": "IPv4", 00:32:06.806 "traddr": "10.0.0.2", 00:32:06.806 "trsvcid": "4420" 00:32:06.806 } 00:32:06.806 ], 00:32:06.806 "allow_any_host": true, 00:32:06.806 "hosts": [] 00:32:06.806 }, 00:32:06.806 { 00:32:06.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.806 "subtype": "NVMe", 00:32:06.806 "listen_addresses": [ 00:32:06.806 { 00:32:06.806 "trtype": "TCP", 00:32:06.806 "adrfam": "IPv4", 00:32:06.806 "traddr": "10.0.0.2", 00:32:06.806 "trsvcid": "4420" 00:32:06.806 } 00:32:06.806 ], 00:32:06.806 "allow_any_host": true, 00:32:06.806 "hosts": [], 00:32:06.806 "serial_number": "SPDK00000000000001", 00:32:06.806 "model_number": "SPDK bdev Controller", 00:32:06.806 "max_namespaces": 32, 00:32:06.806 "min_cntlid": 1, 00:32:06.806 "max_cntlid": 65519, 00:32:06.806 "namespaces": [ 00:32:06.806 { 00:32:06.806 "nsid": 1, 00:32:06.806 "bdev_name": "Malloc0", 00:32:06.806 "name": "Malloc0", 00:32:06.806 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:06.806 "eui64": "ABCDEF0123456789", 00:32:06.806 "uuid": "74570d85-c566-4010-927b-88fe5c5133ab" 00:32:06.806 } 00:32:06.806 ] 00:32:06.806 } 00:32:06.806 ] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.806 19:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:06.806 [2024-07-22 19:36:25.753057] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
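Before the identify run that starts above, host/identify.sh has configured the target over its RPC socket: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, an NVM subsystem exposing that bdev as namespace 1, and both a data listener and a discovery listener on 10.0.0.2:4420 (the nvmf_get_subsystems JSON above shows the resulting configuration). Expressed directly against scripts/rpc.py (rpc_cmd in the trace is the test framework's thin wrapper around it), the same setup is roughly:

  # transport and backing bdev (64 MB, 512 B blocks)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem with one namespace, open to any host
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  # data listener and discovery listener on the namespaced target address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # what the log prints next: identify the discovery controller over NVMe/TCP
  build/bin/spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

Arguments are copied from the rpc_cmd and spdk_nvme_identify lines in the trace; only the rpc.py and spdk_nvme_identify paths are written relative to the SPDK tree here rather than as the absolute workspace paths the job uses.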
00:32:06.806 [2024-07-22 19:36:25.753146] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073322 ] 00:32:07.070 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.070 [2024-07-22 19:36:25.806699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:07.070 [2024-07-22 19:36:25.806793] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:07.070 [2024-07-22 19:36:25.806809] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:07.070 [2024-07-22 19:36:25.806832] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:07.070 [2024-07-22 19:36:25.806848] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:07.070 [2024-07-22 19:36:25.807379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:07.070 [2024-07-22 19:36:25.807429] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025380 0 00:32:07.070 [2024-07-22 19:36:25.814221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:07.070 [2024-07-22 19:36:25.814249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:07.070 [2024-07-22 19:36:25.814257] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:07.070 [2024-07-22 19:36:25.814263] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:07.070 [2024-07-22 19:36:25.814320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.814333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.814343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.070 [2024-07-22 19:36:25.814366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:07.070 [2024-07-22 19:36:25.814391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.070 [2024-07-22 19:36:25.822221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.070 [2024-07-22 19:36:25.822243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.070 [2024-07-22 19:36:25.822250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.822259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.070 [2024-07-22 19:36:25.822278] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:07.070 [2024-07-22 19:36:25.822297] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:07.070 [2024-07-22 19:36:25.822306] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:07.070 [2024-07-22 19:36:25.822326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.822334] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.822341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.070 [2024-07-22 19:36:25.822358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.070 [2024-07-22 19:36:25.822380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.070 [2024-07-22 19:36:25.822642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.070 [2024-07-22 19:36:25.822653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.070 [2024-07-22 19:36:25.822659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.822667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.070 [2024-07-22 19:36:25.822677] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:07.070 [2024-07-22 19:36:25.822692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:07.070 [2024-07-22 19:36:25.822704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.822711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.822717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.070 [2024-07-22 19:36:25.822732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.070 [2024-07-22 19:36:25.822750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.070 [2024-07-22 19:36:25.822957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.070 [2024-07-22 19:36:25.822967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.070 [2024-07-22 19:36:25.822972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.822978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.070 [2024-07-22 19:36:25.822990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:07.070 [2024-07-22 19:36:25.823005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:07.070 [2024-07-22 19:36:25.823015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.823024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.070 [2024-07-22 19:36:25.823030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.070 [2024-07-22 19:36:25.823042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.070 [2024-07-22 19:36:25.823059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.070 [2024-07-22 19:36:25.823285] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.071 [2024-07-22 19:36:25.823295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.071 [2024-07-22 19:36:25.823300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.823306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.071 [2024-07-22 19:36:25.823316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:07.071 [2024-07-22 19:36:25.823329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.823336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.823343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.823354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.071 [2024-07-22 19:36:25.823373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.071 [2024-07-22 19:36:25.823597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.071 [2024-07-22 19:36:25.823608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.071 [2024-07-22 19:36:25.823615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.823621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.071 [2024-07-22 19:36:25.823629] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:07.071 [2024-07-22 19:36:25.823638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:07.071 [2024-07-22 19:36:25.823649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:07.071 [2024-07-22 19:36:25.823758] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:07.071 [2024-07-22 19:36:25.823765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:07.071 [2024-07-22 19:36:25.823778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.823785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.823791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.823803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.071 [2024-07-22 19:36:25.823821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.071 [2024-07-22 19:36:25.824054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.071 [2024-07-22 19:36:25.824066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:32:07.071 [2024-07-22 19:36:25.824072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.071 [2024-07-22 19:36:25.824095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:07.071 [2024-07-22 19:36:25.824113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.824137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.071 [2024-07-22 19:36:25.824152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.071 [2024-07-22 19:36:25.824381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.071 [2024-07-22 19:36:25.824391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.071 [2024-07-22 19:36:25.824396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.071 [2024-07-22 19:36:25.824410] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:07.071 [2024-07-22 19:36:25.824418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:07.071 [2024-07-22 19:36:25.824430] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:07.071 [2024-07-22 19:36:25.824444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:07.071 [2024-07-22 19:36:25.824463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.824484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.071 [2024-07-22 19:36:25.824499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.071 [2024-07-22 19:36:25.824813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.071 [2024-07-22 19:36:25.824822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.071 [2024-07-22 19:36:25.824829] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824837] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=0 00:32:07.071 [2024-07-22 19:36:25.824845] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:07.071 [2024-07-22 19:36:25.824852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824870] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.824877] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.071 [2024-07-22 19:36:25.870236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.071 [2024-07-22 19:36:25.870242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.071 [2024-07-22 19:36:25.870273] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:07.071 [2024-07-22 19:36:25.870286] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:07.071 [2024-07-22 19:36:25.870294] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:07.071 [2024-07-22 19:36:25.870303] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:07.071 [2024-07-22 19:36:25.870311] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:07.071 [2024-07-22 19:36:25.870319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:07.071 [2024-07-22 19:36:25.870333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:07.071 [2024-07-22 19:36:25.870344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.870377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:07.071 [2024-07-22 19:36:25.870397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.071 [2024-07-22 19:36:25.870596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.071 [2024-07-22 19:36:25.870606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.071 [2024-07-22 19:36:25.870611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.071 [2024-07-22 19:36:25.870636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 
00:32:07.071 [2024-07-22 19:36:25.870661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.071 [2024-07-22 19:36:25.870673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.870694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.071 [2024-07-22 19:36:25.870703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.870724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.071 [2024-07-22 19:36:25.870732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.870752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.071 [2024-07-22 19:36:25.870759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:07.071 [2024-07-22 19:36:25.870776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:07.071 [2024-07-22 19:36:25.870786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.071 [2024-07-22 19:36:25.870793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.071 [2024-07-22 19:36:25.870806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.072 [2024-07-22 19:36:25.870825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.072 [2024-07-22 19:36:25.870833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:32:07.072 [2024-07-22 19:36:25.870840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:32:07.072 [2024-07-22 19:36:25.870847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.072 [2024-07-22 19:36:25.870854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.072 [2024-07-22 19:36:25.871098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.072 [2024-07-22 19:36:25.871108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.072 [2024-07-22 19:36:25.871113] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.072 [2024-07-22 19:36:25.871128] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:07.072 [2024-07-22 19:36:25.871137] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:07.072 [2024-07-22 19:36:25.871155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.072 [2024-07-22 19:36:25.871174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.072 [2024-07-22 19:36:25.871189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.072 [2024-07-22 19:36:25.871416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.072 [2024-07-22 19:36:25.871427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.072 [2024-07-22 19:36:25.871436] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871444] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 00:32:07.072 [2024-07-22 19:36:25.871452] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:07.072 [2024-07-22 19:36:25.871459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871470] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871476] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.072 [2024-07-22 19:36:25.871625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.072 [2024-07-22 19:36:25.871631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.072 [2024-07-22 19:36:25.871659] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:07.072 [2024-07-22 19:36:25.871701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.072 [2024-07-22 19:36:25.871727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.072 [2024-07-22 19:36:25.871738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.871751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000025380) 00:32:07.072 [2024-07-22 19:36:25.871761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.072 [2024-07-22 19:36:25.871778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.072 [2024-07-22 19:36:25.871787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:07.072 [2024-07-22 19:36:25.872125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.072 [2024-07-22 19:36:25.872135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.072 [2024-07-22 19:36:25.872141] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.872148] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=1024, cccid=4 00:32:07.072 [2024-07-22 19:36:25.872158] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=1024 00:32:07.072 [2024-07-22 19:36:25.872165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.872175] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.872181] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.872192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.072 [2024-07-22 19:36:25.872205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.072 [2024-07-22 19:36:25.872211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.872217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:07.072 [2024-07-22 19:36:25.913484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.072 [2024-07-22 19:36:25.913504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.072 [2024-07-22 19:36:25.913509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.072 [2024-07-22 19:36:25.913539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.072 [2024-07-22 19:36:25.913560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.072 [2024-07-22 19:36:25.913583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.072 [2024-07-22 19:36:25.913765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.072 [2024-07-22 19:36:25.913774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.072 [2024-07-22 19:36:25.913780] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913794] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=3072, cccid=4 00:32:07.072 [2024-07-22 19:36:25.913801] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=3072 00:32:07.072 [2024-07-22 19:36:25.913807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913817] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913825] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.072 [2024-07-22 19:36:25.913955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.072 [2024-07-22 19:36:25.913960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.072 [2024-07-22 19:36:25.913982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.913992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.072 [2024-07-22 19:36:25.914004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.072 [2024-07-22 19:36:25.914024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.072 [2024-07-22 19:36:25.918214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.072 [2024-07-22 19:36:25.918230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.072 [2024-07-22 19:36:25.918235] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.918242] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=8, cccid=4 00:32:07.072 [2024-07-22 19:36:25.918249] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=8 00:32:07.072 [2024-07-22 19:36:25.918255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.918265] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.918271] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.958219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.072 [2024-07-22 19:36:25.958237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.072 [2024-07-22 19:36:25.958243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.072 [2024-07-22 19:36:25.958250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.072 ===================================================== 00:32:07.072 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:07.072 ===================================================== 00:32:07.072 Controller Capabilities/Features 00:32:07.072 ================================ 00:32:07.072 Vendor ID: 0000 00:32:07.072 Subsystem Vendor ID: 0000 00:32:07.072 Serial Number: .................... 00:32:07.072 Model Number: ........................................ 
00:32:07.072 Firmware Version: 24.09 00:32:07.072 Recommended Arb Burst: 0 00:32:07.072 IEEE OUI Identifier: 00 00 00 00:32:07.072 Multi-path I/O 00:32:07.072 May have multiple subsystem ports: No 00:32:07.072 May have multiple controllers: No 00:32:07.072 Associated with SR-IOV VF: No 00:32:07.072 Max Data Transfer Size: 131072 00:32:07.072 Max Number of Namespaces: 0 00:32:07.072 Max Number of I/O Queues: 1024 00:32:07.072 NVMe Specification Version (VS): 1.3 00:32:07.072 NVMe Specification Version (Identify): 1.3 00:32:07.072 Maximum Queue Entries: 128 00:32:07.072 Contiguous Queues Required: Yes 00:32:07.072 Arbitration Mechanisms Supported 00:32:07.072 Weighted Round Robin: Not Supported 00:32:07.072 Vendor Specific: Not Supported 00:32:07.072 Reset Timeout: 15000 ms 00:32:07.072 Doorbell Stride: 4 bytes 00:32:07.072 NVM Subsystem Reset: Not Supported 00:32:07.072 Command Sets Supported 00:32:07.072 NVM Command Set: Supported 00:32:07.072 Boot Partition: Not Supported 00:32:07.073 Memory Page Size Minimum: 4096 bytes 00:32:07.073 Memory Page Size Maximum: 4096 bytes 00:32:07.073 Persistent Memory Region: Not Supported 00:32:07.073 Optional Asynchronous Events Supported 00:32:07.073 Namespace Attribute Notices: Not Supported 00:32:07.073 Firmware Activation Notices: Not Supported 00:32:07.073 ANA Change Notices: Not Supported 00:32:07.073 PLE Aggregate Log Change Notices: Not Supported 00:32:07.073 LBA Status Info Alert Notices: Not Supported 00:32:07.073 EGE Aggregate Log Change Notices: Not Supported 00:32:07.073 Normal NVM Subsystem Shutdown event: Not Supported 00:32:07.073 Zone Descriptor Change Notices: Not Supported 00:32:07.073 Discovery Log Change Notices: Supported 00:32:07.073 Controller Attributes 00:32:07.073 128-bit Host Identifier: Not Supported 00:32:07.073 Non-Operational Permissive Mode: Not Supported 00:32:07.073 NVM Sets: Not Supported 00:32:07.073 Read Recovery Levels: Not Supported 00:32:07.073 Endurance Groups: Not Supported 00:32:07.073 Predictable Latency Mode: Not Supported 00:32:07.073 Traffic Based Keep ALive: Not Supported 00:32:07.073 Namespace Granularity: Not Supported 00:32:07.073 SQ Associations: Not Supported 00:32:07.073 UUID List: Not Supported 00:32:07.073 Multi-Domain Subsystem: Not Supported 00:32:07.073 Fixed Capacity Management: Not Supported 00:32:07.073 Variable Capacity Management: Not Supported 00:32:07.073 Delete Endurance Group: Not Supported 00:32:07.073 Delete NVM Set: Not Supported 00:32:07.073 Extended LBA Formats Supported: Not Supported 00:32:07.073 Flexible Data Placement Supported: Not Supported 00:32:07.073 00:32:07.073 Controller Memory Buffer Support 00:32:07.073 ================================ 00:32:07.073 Supported: No 00:32:07.073 00:32:07.073 Persistent Memory Region Support 00:32:07.073 ================================ 00:32:07.073 Supported: No 00:32:07.073 00:32:07.073 Admin Command Set Attributes 00:32:07.073 ============================ 00:32:07.073 Security Send/Receive: Not Supported 00:32:07.073 Format NVM: Not Supported 00:32:07.073 Firmware Activate/Download: Not Supported 00:32:07.073 Namespace Management: Not Supported 00:32:07.073 Device Self-Test: Not Supported 00:32:07.073 Directives: Not Supported 00:32:07.073 NVMe-MI: Not Supported 00:32:07.073 Virtualization Management: Not Supported 00:32:07.073 Doorbell Buffer Config: Not Supported 00:32:07.073 Get LBA Status Capability: Not Supported 00:32:07.073 Command & Feature Lockdown Capability: Not Supported 00:32:07.073 Abort Command Limit: 1 00:32:07.073 Async 
Event Request Limit: 4 00:32:07.073 Number of Firmware Slots: N/A 00:32:07.073 Firmware Slot 1 Read-Only: N/A 00:32:07.073 Firmware Activation Without Reset: N/A 00:32:07.073 Multiple Update Detection Support: N/A 00:32:07.073 Firmware Update Granularity: No Information Provided 00:32:07.073 Per-Namespace SMART Log: No 00:32:07.073 Asymmetric Namespace Access Log Page: Not Supported 00:32:07.073 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:07.073 Command Effects Log Page: Not Supported 00:32:07.073 Get Log Page Extended Data: Supported 00:32:07.073 Telemetry Log Pages: Not Supported 00:32:07.073 Persistent Event Log Pages: Not Supported 00:32:07.073 Supported Log Pages Log Page: May Support 00:32:07.073 Commands Supported & Effects Log Page: Not Supported 00:32:07.073 Feature Identifiers & Effects Log Page:May Support 00:32:07.073 NVMe-MI Commands & Effects Log Page: May Support 00:32:07.073 Data Area 4 for Telemetry Log: Not Supported 00:32:07.073 Error Log Page Entries Supported: 128 00:32:07.073 Keep Alive: Not Supported 00:32:07.073 00:32:07.073 NVM Command Set Attributes 00:32:07.073 ========================== 00:32:07.073 Submission Queue Entry Size 00:32:07.073 Max: 1 00:32:07.073 Min: 1 00:32:07.073 Completion Queue Entry Size 00:32:07.073 Max: 1 00:32:07.073 Min: 1 00:32:07.073 Number of Namespaces: 0 00:32:07.073 Compare Command: Not Supported 00:32:07.073 Write Uncorrectable Command: Not Supported 00:32:07.073 Dataset Management Command: Not Supported 00:32:07.073 Write Zeroes Command: Not Supported 00:32:07.073 Set Features Save Field: Not Supported 00:32:07.073 Reservations: Not Supported 00:32:07.073 Timestamp: Not Supported 00:32:07.073 Copy: Not Supported 00:32:07.073 Volatile Write Cache: Not Present 00:32:07.073 Atomic Write Unit (Normal): 1 00:32:07.073 Atomic Write Unit (PFail): 1 00:32:07.073 Atomic Compare & Write Unit: 1 00:32:07.073 Fused Compare & Write: Supported 00:32:07.073 Scatter-Gather List 00:32:07.073 SGL Command Set: Supported 00:32:07.073 SGL Keyed: Supported 00:32:07.073 SGL Bit Bucket Descriptor: Not Supported 00:32:07.073 SGL Metadata Pointer: Not Supported 00:32:07.073 Oversized SGL: Not Supported 00:32:07.073 SGL Metadata Address: Not Supported 00:32:07.073 SGL Offset: Supported 00:32:07.073 Transport SGL Data Block: Not Supported 00:32:07.073 Replay Protected Memory Block: Not Supported 00:32:07.073 00:32:07.073 Firmware Slot Information 00:32:07.073 ========================= 00:32:07.073 Active slot: 0 00:32:07.073 00:32:07.073 00:32:07.073 Error Log 00:32:07.073 ========= 00:32:07.073 00:32:07.073 Active Namespaces 00:32:07.073 ================= 00:32:07.073 Discovery Log Page 00:32:07.073 ================== 00:32:07.073 Generation Counter: 2 00:32:07.073 Number of Records: 2 00:32:07.073 Record Format: 0 00:32:07.073 00:32:07.073 Discovery Log Entry 0 00:32:07.073 ---------------------- 00:32:07.073 Transport Type: 3 (TCP) 00:32:07.073 Address Family: 1 (IPv4) 00:32:07.073 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:07.073 Entry Flags: 00:32:07.073 Duplicate Returned Information: 1 00:32:07.073 Explicit Persistent Connection Support for Discovery: 1 00:32:07.073 Transport Requirements: 00:32:07.073 Secure Channel: Not Required 00:32:07.073 Port ID: 0 (0x0000) 00:32:07.073 Controller ID: 65535 (0xffff) 00:32:07.073 Admin Max SQ Size: 128 00:32:07.073 Transport Service Identifier: 4420 00:32:07.073 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:07.073 Transport Address: 10.0.0.2 00:32:07.073 
Discovery Log Entry 1 00:32:07.073 ---------------------- 00:32:07.073 Transport Type: 3 (TCP) 00:32:07.073 Address Family: 1 (IPv4) 00:32:07.073 Subsystem Type: 2 (NVM Subsystem) 00:32:07.073 Entry Flags: 00:32:07.073 Duplicate Returned Information: 0 00:32:07.073 Explicit Persistent Connection Support for Discovery: 0 00:32:07.073 Transport Requirements: 00:32:07.073 Secure Channel: Not Required 00:32:07.073 Port ID: 0 (0x0000) 00:32:07.073 Controller ID: 65535 (0xffff) 00:32:07.073 Admin Max SQ Size: 128 00:32:07.073 Transport Service Identifier: 4420 00:32:07.073 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:07.073 Transport Address: 10.0.0.2 [2024-07-22 19:36:25.958392] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:07.073 [2024-07-22 19:36:25.958409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.073 [2024-07-22 19:36:25.958422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.073 [2024-07-22 19:36:25.958430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025380 00:32:07.073 [2024-07-22 19:36:25.958439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.073 [2024-07-22 19:36:25.958446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025380 00:32:07.073 [2024-07-22 19:36:25.958454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.073 [2024-07-22 19:36:25.958461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.073 [2024-07-22 19:36:25.958469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.073 [2024-07-22 19:36:25.958482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.073 [2024-07-22 19:36:25.958489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.073 [2024-07-22 19:36:25.958496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.073 [2024-07-22 19:36:25.958512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.073 [2024-07-22 19:36:25.958535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.073 [2024-07-22 19:36:25.958759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.073 [2024-07-22 19:36:25.958770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.073 [2024-07-22 19:36:25.958776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.073 [2024-07-22 19:36:25.958783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.073 [2024-07-22 19:36:25.958795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.958802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.958808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.958824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.958843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.959090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.959100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.959105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.959119] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:07.074 [2024-07-22 19:36:25.959127] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:07.074 [2024-07-22 19:36:25.959141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.959165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.959180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.959398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.959409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.959414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.959434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.959457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.959471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.959701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.959710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.959715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.959734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959740] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.959759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.959773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.959964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.959973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.959978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.959984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.959997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.960019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.960032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.960260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.960270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.960275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.960295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.960316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.960330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.960525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.960539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.960544] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.960564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960575] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.960586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.960599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.960825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.960834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.960839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.960858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.960875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.960886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.960899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.961097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.961105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.961110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.961130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.961154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.961168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.961393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.961403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.961408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.961428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.961456] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.961470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.961694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.961703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.961708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.961727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.961749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.961762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.074 [2024-07-22 19:36:25.961944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.074 [2024-07-22 19:36:25.961953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.074 [2024-07-22 19:36:25.961958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.074 [2024-07-22 19:36:25.961977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.074 [2024-07-22 19:36:25.961992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.074 [2024-07-22 19:36:25.962002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.074 [2024-07-22 19:36:25.962015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.075 [2024-07-22 19:36:25.966213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.075 [2024-07-22 19:36:25.966230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.075 [2024-07-22 19:36:25.966236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.075 [2024-07-22 19:36:25.966242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.075 [2024-07-22 19:36:25.966259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.075 [2024-07-22 19:36:25.966265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.075 [2024-07-22 19:36:25.966271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.075 [2024-07-22 19:36:25.966282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.075 [2024-07-22 
19:36:25.966301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.075 [2024-07-22 19:36:25.966533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.075 [2024-07-22 19:36:25.966542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.075 [2024-07-22 19:36:25.966547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.075 [2024-07-22 19:36:25.966553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.075 [2024-07-22 19:36:25.966565] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:32:07.075 00:32:07.075 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:07.339 [2024-07-22 19:36:26.060067] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:07.339 [2024-07-22 19:36:26.060142] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073454 ] 00:32:07.339 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.339 [2024-07-22 19:36:26.113583] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:07.339 [2024-07-22 19:36:26.113677] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:07.339 [2024-07-22 19:36:26.113687] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:07.339 [2024-07-22 19:36:26.113706] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:07.339 [2024-07-22 19:36:26.113724] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:07.339 [2024-07-22 19:36:26.114247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:07.339 [2024-07-22 19:36:26.114295] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025380 0 00:32:07.339 [2024-07-22 19:36:26.128218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:07.339 [2024-07-22 19:36:26.128242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:07.339 [2024-07-22 19:36:26.128250] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:07.339 [2024-07-22 19:36:26.128260] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:07.339 [2024-07-22 19:36:26.128310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.128323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.128330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.339 [2024-07-22 19:36:26.128353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:07.339 [2024-07-22 19:36:26.128384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.339 [2024-07-22 19:36:26.136218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.339 [2024-07-22 19:36:26.136240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.339 [2024-07-22 19:36:26.136247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.136256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.339 [2024-07-22 19:36:26.136273] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:07.339 [2024-07-22 19:36:26.136289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:07.339 [2024-07-22 19:36:26.136299] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:07.339 [2024-07-22 19:36:26.136316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.136327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.136334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.339 [2024-07-22 19:36:26.136348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.339 [2024-07-22 19:36:26.136370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.339 [2024-07-22 19:36:26.136614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.339 [2024-07-22 19:36:26.136625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.339 [2024-07-22 19:36:26.136636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.136642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.339 [2024-07-22 19:36:26.136653] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:07.339 [2024-07-22 19:36:26.136665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:07.339 [2024-07-22 19:36:26.136675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.136682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.136689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.339 [2024-07-22 19:36:26.136703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.339 [2024-07-22 19:36:26.136721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.339 [2024-07-22 19:36:26.136960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.339 [2024-07-22 19:36:26.136972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.339 [2024-07-22 19:36:26.136977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.136983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on 
tqpair=0x615000025380 00:32:07.339 [2024-07-22 19:36:26.136992] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:07.339 [2024-07-22 19:36:26.137008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:07.339 [2024-07-22 19:36:26.137019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.137025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.137031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.339 [2024-07-22 19:36:26.137047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.339 [2024-07-22 19:36:26.137062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.339 [2024-07-22 19:36:26.137251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.339 [2024-07-22 19:36:26.137261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.339 [2024-07-22 19:36:26.137266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.137272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.339 [2024-07-22 19:36:26.137281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:07.339 [2024-07-22 19:36:26.137295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.137302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.137308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.339 [2024-07-22 19:36:26.137322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.339 [2024-07-22 19:36:26.137337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.339 [2024-07-22 19:36:26.137564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.339 [2024-07-22 19:36:26.137574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.339 [2024-07-22 19:36:26.137579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.339 [2024-07-22 19:36:26.137589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.339 [2024-07-22 19:36:26.137597] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:07.340 [2024-07-22 19:36:26.137605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:07.340 [2024-07-22 19:36:26.137617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:07.340 [2024-07-22 19:36:26.137725] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:07.340 [2024-07-22 19:36:26.137732] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:07.340 [2024-07-22 19:36:26.137745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.137751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.137757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.137771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.340 [2024-07-22 19:36:26.137786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.340 [2024-07-22 19:36:26.138021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.340 [2024-07-22 19:36:26.138030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.340 [2024-07-22 19:36:26.138036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.340 [2024-07-22 19:36:26.138058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:07.340 [2024-07-22 19:36:26.138072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.138097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.340 [2024-07-22 19:36:26.138115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.340 [2024-07-22 19:36:26.138382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.340 [2024-07-22 19:36:26.138391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.340 [2024-07-22 19:36:26.138397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.340 [2024-07-22 19:36:26.138414] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:07.340 [2024-07-22 19:36:26.138422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:07.340 [2024-07-22 19:36:26.138433] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:07.340 [2024-07-22 19:36:26.138445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:07.340 [2024-07-22 19:36:26.138461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.138480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.340 [2024-07-22 19:36:26.138495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.340 [2024-07-22 19:36:26.138744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.340 [2024-07-22 19:36:26.138755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.340 [2024-07-22 19:36:26.138760] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138767] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=0 00:32:07.340 [2024-07-22 19:36:26.138775] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:07.340 [2024-07-22 19:36:26.138785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138858] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.138866] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.340 [2024-07-22 19:36:26.139060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.340 [2024-07-22 19:36:26.139065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.340 [2024-07-22 19:36:26.139088] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:07.340 [2024-07-22 19:36:26.139098] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:07.340 [2024-07-22 19:36:26.139108] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:07.340 [2024-07-22 19:36:26.139116] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:07.340 [2024-07-22 19:36:26.139123] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:07.340 [2024-07-22 19:36:26.139131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:07.340 [2024-07-22 19:36:26.139143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:07.340 [2024-07-22 19:36:26.139153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.139183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:07.340 [2024-07-22 19:36:26.139204] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.340 [2024-07-22 19:36:26.139405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.340 [2024-07-22 19:36:26.139415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.340 [2024-07-22 19:36:26.139424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.340 [2024-07-22 19:36:26.139443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.139471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.340 [2024-07-22 19:36:26.139481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.139501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.340 [2024-07-22 19:36:26.139509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.139535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.340 [2024-07-22 19:36:26.139543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.139564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.340 [2024-07-22 19:36:26.139571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:07.340 [2024-07-22 19:36:26.139588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:07.340 [2024-07-22 19:36:26.139598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.340 [2024-07-22 19:36:26.139607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.340 [2024-07-22 19:36:26.139622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.340 [2024-07-22 19:36:26.139640] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:07.340 [2024-07-22 19:36:26.139649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:32:07.340 [2024-07-22 19:36:26.139657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:32:07.340 [2024-07-22 19:36:26.139664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.341 [2024-07-22 19:36:26.139670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.341 [2024-07-22 19:36:26.139965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.341 [2024-07-22 19:36:26.139974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.341 [2024-07-22 19:36:26.139979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.139987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.341 [2024-07-22 19:36:26.139996] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:07.341 [2024-07-22 19:36:26.140004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.140018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.140028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.140038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.140045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.140052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.341 [2024-07-22 19:36:26.140063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:07.341 [2024-07-22 19:36:26.140077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.341 [2024-07-22 19:36:26.144216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.341 [2024-07-22 19:36:26.144237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.341 [2024-07-22 19:36:26.144243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.144250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.341 [2024-07-22 19:36:26.144341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.144362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.144376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.144383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.341 [2024-07-22 19:36:26.144396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.341 [2024-07-22 19:36:26.144414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.341 [2024-07-22 19:36:26.144622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.341 [2024-07-22 19:36:26.144632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.341 [2024-07-22 19:36:26.144640] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.144646] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 00:32:07.341 [2024-07-22 19:36:26.144654] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:07.341 [2024-07-22 19:36:26.144660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.144695] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.144701] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.185415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.341 [2024-07-22 19:36:26.185434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.341 [2024-07-22 19:36:26.185440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.185447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.341 [2024-07-22 19:36:26.185477] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:07.341 [2024-07-22 19:36:26.185494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.185508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.185522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.185529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.341 [2024-07-22 19:36:26.185544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.341 [2024-07-22 19:36:26.185562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.341 [2024-07-22 19:36:26.185678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.341 [2024-07-22 19:36:26.185688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.341 [2024-07-22 19:36:26.185694] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.185700] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 00:32:07.341 [2024-07-22 19:36:26.185707] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): 
expected_datao=0, payload_size=4096 00:32:07.341 [2024-07-22 19:36:26.185714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.185748] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.185755] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.227389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.341 [2024-07-22 19:36:26.227408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.341 [2024-07-22 19:36:26.227414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.227421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.341 [2024-07-22 19:36:26.227445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.227464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.227480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.227486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.341 [2024-07-22 19:36:26.227501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.341 [2024-07-22 19:36:26.227524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.341 [2024-07-22 19:36:26.227714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.341 [2024-07-22 19:36:26.227723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.341 [2024-07-22 19:36:26.227729] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.227735] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 00:32:07.341 [2024-07-22 19:36:26.227741] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:07.341 [2024-07-22 19:36:26.227748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.227788] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.227794] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.272214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.341 [2024-07-22 19:36:26.272233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.341 [2024-07-22 19:36:26.272239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.272245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.341 [2024-07-22 19:36:26.272263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.272276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.272289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.272298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.272308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.272317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.272324] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:07.341 [2024-07-22 19:36:26.272332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:07.341 [2024-07-22 19:36:26.272340] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:07.341 [2024-07-22 19:36:26.272371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.272379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.341 [2024-07-22 19:36:26.272393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.341 [2024-07-22 19:36:26.272404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.272410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.272416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:07.341 [2024-07-22 19:36:26.272426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.341 [2024-07-22 19:36:26.272448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.341 [2024-07-22 19:36:26.272456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:07.341 [2024-07-22 19:36:26.272681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.341 [2024-07-22 19:36:26.272691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.341 [2024-07-22 19:36:26.272697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.341 [2024-07-22 19:36:26.272704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.341 [2024-07-22 19:36:26.272717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.341 [2024-07-22 19:36:26.272725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.272730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.272736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:07.342 [2024-07-22 19:36:26.272749] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.272755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:07.342 [2024-07-22 19:36:26.272766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.342 [2024-07-22 19:36:26.272781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:07.342 [2024-07-22 19:36:26.273000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.342 [2024-07-22 19:36:26.273009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.273014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:07.342 [2024-07-22 19:36:26.273032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:07.342 [2024-07-22 19:36:26.273049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.342 [2024-07-22 19:36:26.273062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:07.342 [2024-07-22 19:36:26.273271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.342 [2024-07-22 19:36:26.273280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.273286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:07.342 [2024-07-22 19:36:26.273304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:07.342 [2024-07-22 19:36:26.273320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.342 [2024-07-22 19:36:26.273333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:07.342 [2024-07-22 19:36:26.273539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.342 [2024-07-22 19:36:26.273548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.273553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:07.342 [2024-07-22 19:36:26.273585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:07.342 [2024-07-22 19:36:26.273603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:07.342 [2024-07-22 19:36:26.273619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:07.342 [2024-07-22 19:36:26.273636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.342 [2024-07-22 19:36:26.273649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000025380) 00:32:07.342 [2024-07-22 19:36:26.273665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.342 [2024-07-22 19:36:26.273678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.273685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025380) 00:32:07.342 [2024-07-22 19:36:26.273699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.342 [2024-07-22 19:36:26.273714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:07.342 [2024-07-22 19:36:26.273723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:07.342 [2024-07-22 19:36:26.273730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:32:07.342 [2024-07-22 19:36:26.273736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:32:07.342 [2024-07-22 19:36:26.274005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.342 [2024-07-22 19:36:26.274015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.342 [2024-07-22 19:36:26.274020] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274027] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=8192, cccid=5 00:32:07.342 [2024-07-22 19:36:26.274037] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000025380): expected_datao=0, payload_size=8192 00:32:07.342 [2024-07-22 19:36:26.274044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274175] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274182] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.342 [2024-07-22 19:36:26.274199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.342 [2024-07-22 19:36:26.274211] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274217] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=512, cccid=4 00:32:07.342 [2024-07-22 19:36:26.274223] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on 
tqpair(0x615000025380): expected_datao=0, payload_size=512 00:32:07.342 [2024-07-22 19:36:26.274229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274244] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274249] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.342 [2024-07-22 19:36:26.274265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.342 [2024-07-22 19:36:26.274270] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274276] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=512, cccid=6 00:32:07.342 [2024-07-22 19:36:26.274282] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000025380): expected_datao=0, payload_size=512 00:32:07.342 [2024-07-22 19:36:26.274290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274299] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274304] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:07.342 [2024-07-22 19:36:26.274320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:07.342 [2024-07-22 19:36:26.274325] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274331] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=7 00:32:07.342 [2024-07-22 19:36:26.274337] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:07.342 [2024-07-22 19:36:26.274343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274352] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274358] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.342 [2024-07-22 19:36:26.274446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.274451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:07.342 [2024-07-22 19:36:26.274480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.342 [2024-07-22 19:36:26.274488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.274493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:07.342 [2024-07-22 19:36:26.274511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.342 [2024-07-22 19:36:26.274521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.274526] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000025380 00:32:07.342 [2024-07-22 19:36:26.274543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.342 [2024-07-22 19:36:26.274557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.342 [2024-07-22 19:36:26.274562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.342 [2024-07-22 19:36:26.274568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025380 00:32:07.342 ===================================================== 00:32:07.342 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:07.342 ===================================================== 00:32:07.342 Controller Capabilities/Features 00:32:07.342 ================================ 00:32:07.342 Vendor ID: 8086 00:32:07.343 Subsystem Vendor ID: 8086 00:32:07.343 Serial Number: SPDK00000000000001 00:32:07.343 Model Number: SPDK bdev Controller 00:32:07.343 Firmware Version: 24.09 00:32:07.343 Recommended Arb Burst: 6 00:32:07.343 IEEE OUI Identifier: e4 d2 5c 00:32:07.343 Multi-path I/O 00:32:07.343 May have multiple subsystem ports: Yes 00:32:07.343 May have multiple controllers: Yes 00:32:07.343 Associated with SR-IOV VF: No 00:32:07.343 Max Data Transfer Size: 131072 00:32:07.343 Max Number of Namespaces: 32 00:32:07.343 Max Number of I/O Queues: 127 00:32:07.343 NVMe Specification Version (VS): 1.3 00:32:07.343 NVMe Specification Version (Identify): 1.3 00:32:07.343 Maximum Queue Entries: 128 00:32:07.343 Contiguous Queues Required: Yes 00:32:07.343 Arbitration Mechanisms Supported 00:32:07.343 Weighted Round Robin: Not Supported 00:32:07.343 Vendor Specific: Not Supported 00:32:07.343 Reset Timeout: 15000 ms 00:32:07.343 Doorbell Stride: 4 bytes 00:32:07.343 NVM Subsystem Reset: Not Supported 00:32:07.343 Command Sets Supported 00:32:07.343 NVM Command Set: Supported 00:32:07.343 Boot Partition: Not Supported 00:32:07.343 Memory Page Size Minimum: 4096 bytes 00:32:07.343 Memory Page Size Maximum: 4096 bytes 00:32:07.343 Persistent Memory Region: Not Supported 00:32:07.343 Optional Asynchronous Events Supported 00:32:07.343 Namespace Attribute Notices: Supported 00:32:07.343 Firmware Activation Notices: Not Supported 00:32:07.343 ANA Change Notices: Not Supported 00:32:07.343 PLE Aggregate Log Change Notices: Not Supported 00:32:07.343 LBA Status Info Alert Notices: Not Supported 00:32:07.343 EGE Aggregate Log Change Notices: Not Supported 00:32:07.343 Normal NVM Subsystem Shutdown event: Not Supported 00:32:07.343 Zone Descriptor Change Notices: Not Supported 00:32:07.343 Discovery Log Change Notices: Not Supported 00:32:07.343 Controller Attributes 00:32:07.343 128-bit Host Identifier: Supported 00:32:07.343 Non-Operational Permissive Mode: Not Supported 00:32:07.343 NVM Sets: Not Supported 00:32:07.343 Read Recovery Levels: Not Supported 00:32:07.343 Endurance Groups: Not Supported 00:32:07.343 Predictable Latency Mode: Not Supported 00:32:07.343 Traffic Based Keep ALive: Not Supported 00:32:07.343 Namespace Granularity: Not Supported 00:32:07.343 SQ Associations: Not Supported 00:32:07.343 UUID List: Not Supported 00:32:07.343 Multi-Domain Subsystem: Not Supported 00:32:07.343 Fixed Capacity Management: Not Supported 00:32:07.343 Variable Capacity Management: Not Supported 
00:32:07.343 Delete Endurance Group: Not Supported 00:32:07.343 Delete NVM Set: Not Supported 00:32:07.343 Extended LBA Formats Supported: Not Supported 00:32:07.343 Flexible Data Placement Supported: Not Supported 00:32:07.343 00:32:07.343 Controller Memory Buffer Support 00:32:07.343 ================================ 00:32:07.343 Supported: No 00:32:07.343 00:32:07.343 Persistent Memory Region Support 00:32:07.343 ================================ 00:32:07.343 Supported: No 00:32:07.343 00:32:07.343 Admin Command Set Attributes 00:32:07.343 ============================ 00:32:07.343 Security Send/Receive: Not Supported 00:32:07.343 Format NVM: Not Supported 00:32:07.343 Firmware Activate/Download: Not Supported 00:32:07.343 Namespace Management: Not Supported 00:32:07.343 Device Self-Test: Not Supported 00:32:07.343 Directives: Not Supported 00:32:07.343 NVMe-MI: Not Supported 00:32:07.343 Virtualization Management: Not Supported 00:32:07.343 Doorbell Buffer Config: Not Supported 00:32:07.343 Get LBA Status Capability: Not Supported 00:32:07.343 Command & Feature Lockdown Capability: Not Supported 00:32:07.343 Abort Command Limit: 4 00:32:07.343 Async Event Request Limit: 4 00:32:07.343 Number of Firmware Slots: N/A 00:32:07.343 Firmware Slot 1 Read-Only: N/A 00:32:07.343 Firmware Activation Without Reset: N/A 00:32:07.343 Multiple Update Detection Support: N/A 00:32:07.343 Firmware Update Granularity: No Information Provided 00:32:07.343 Per-Namespace SMART Log: No 00:32:07.343 Asymmetric Namespace Access Log Page: Not Supported 00:32:07.343 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:32:07.343 Command Effects Log Page: Supported 00:32:07.343 Get Log Page Extended Data: Supported 00:32:07.343 Telemetry Log Pages: Not Supported 00:32:07.343 Persistent Event Log Pages: Not Supported 00:32:07.343 Supported Log Pages Log Page: May Support 00:32:07.343 Commands Supported & Effects Log Page: Not Supported 00:32:07.343 Feature Identifiers & Effects Log Page:May Support 00:32:07.343 NVMe-MI Commands & Effects Log Page: May Support 00:32:07.343 Data Area 4 for Telemetry Log: Not Supported 00:32:07.343 Error Log Page Entries Supported: 128 00:32:07.343 Keep Alive: Supported 00:32:07.343 Keep Alive Granularity: 10000 ms 00:32:07.343 00:32:07.343 NVM Command Set Attributes 00:32:07.343 ========================== 00:32:07.343 Submission Queue Entry Size 00:32:07.343 Max: 64 00:32:07.343 Min: 64 00:32:07.343 Completion Queue Entry Size 00:32:07.343 Max: 16 00:32:07.343 Min: 16 00:32:07.343 Number of Namespaces: 32 00:32:07.343 Compare Command: Supported 00:32:07.343 Write Uncorrectable Command: Not Supported 00:32:07.343 Dataset Management Command: Supported 00:32:07.343 Write Zeroes Command: Supported 00:32:07.343 Set Features Save Field: Not Supported 00:32:07.343 Reservations: Supported 00:32:07.343 Timestamp: Not Supported 00:32:07.343 Copy: Supported 00:32:07.343 Volatile Write Cache: Present 00:32:07.343 Atomic Write Unit (Normal): 1 00:32:07.343 Atomic Write Unit (PFail): 1 00:32:07.343 Atomic Compare & Write Unit: 1 00:32:07.343 Fused Compare & Write: Supported 00:32:07.343 Scatter-Gather List 00:32:07.343 SGL Command Set: Supported 00:32:07.343 SGL Keyed: Supported 00:32:07.343 SGL Bit Bucket Descriptor: Not Supported 00:32:07.343 SGL Metadata Pointer: Not Supported 00:32:07.343 Oversized SGL: Not Supported 00:32:07.343 SGL Metadata Address: Not Supported 00:32:07.343 SGL Offset: Supported 00:32:07.343 Transport SGL Data Block: Not Supported 00:32:07.343 Replay Protected Memory Block: Not 
Supported 00:32:07.343 00:32:07.343 Firmware Slot Information 00:32:07.343 ========================= 00:32:07.343 Active slot: 1 00:32:07.343 Slot 1 Firmware Revision: 24.09 00:32:07.343 00:32:07.343 00:32:07.343 Commands Supported and Effects 00:32:07.343 ============================== 00:32:07.343 Admin Commands 00:32:07.343 -------------- 00:32:07.343 Get Log Page (02h): Supported 00:32:07.343 Identify (06h): Supported 00:32:07.343 Abort (08h): Supported 00:32:07.343 Set Features (09h): Supported 00:32:07.343 Get Features (0Ah): Supported 00:32:07.343 Asynchronous Event Request (0Ch): Supported 00:32:07.343 Keep Alive (18h): Supported 00:32:07.343 I/O Commands 00:32:07.343 ------------ 00:32:07.343 Flush (00h): Supported LBA-Change 00:32:07.343 Write (01h): Supported LBA-Change 00:32:07.343 Read (02h): Supported 00:32:07.343 Compare (05h): Supported 00:32:07.343 Write Zeroes (08h): Supported LBA-Change 00:32:07.343 Dataset Management (09h): Supported LBA-Change 00:32:07.343 Copy (19h): Supported LBA-Change 00:32:07.343 00:32:07.343 Error Log 00:32:07.343 ========= 00:32:07.343 00:32:07.343 Arbitration 00:32:07.343 =========== 00:32:07.343 Arbitration Burst: 1 00:32:07.343 00:32:07.343 Power Management 00:32:07.343 ================ 00:32:07.343 Number of Power States: 1 00:32:07.343 Current Power State: Power State #0 00:32:07.343 Power State #0: 00:32:07.343 Max Power: 0.00 W 00:32:07.343 Non-Operational State: Operational 00:32:07.343 Entry Latency: Not Reported 00:32:07.343 Exit Latency: Not Reported 00:32:07.343 Relative Read Throughput: 0 00:32:07.343 Relative Read Latency: 0 00:32:07.343 Relative Write Throughput: 0 00:32:07.343 Relative Write Latency: 0 00:32:07.343 Idle Power: Not Reported 00:32:07.343 Active Power: Not Reported 00:32:07.343 Non-Operational Permissive Mode: Not Supported 00:32:07.344 00:32:07.344 Health Information 00:32:07.344 ================== 00:32:07.344 Critical Warnings: 00:32:07.344 Available Spare Space: OK 00:32:07.344 Temperature: OK 00:32:07.344 Device Reliability: OK 00:32:07.344 Read Only: No 00:32:07.344 Volatile Memory Backup: OK 00:32:07.344 Current Temperature: 0 Kelvin (-273 Celsius) 00:32:07.344 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:32:07.344 Available Spare: 0% 00:32:07.344 Available Spare Threshold: 0% 00:32:07.344 Life Percentage Used:[2024-07-22 19:36:26.274728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.274738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025380) 00:32:07.344 [2024-07-22 19:36:26.274750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.344 [2024-07-22 19:36:26.274768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:32:07.344 [2024-07-22 19:36:26.274983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.344 [2024-07-22 19:36:26.274996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.344 [2024-07-22 19:36:26.275002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.275057] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:32:07.344 [2024-07-22 
19:36:26.275071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.275085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.344 [2024-07-22 19:36:26.275093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.275102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.344 [2024-07-22 19:36:26.275109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.275117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.344 [2024-07-22 19:36:26.275124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.275132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.344 [2024-07-22 19:36:26.275144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.344 [2024-07-22 19:36:26.275168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.344 [2024-07-22 19:36:26.275187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.344 [2024-07-22 19:36:26.275399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.344 [2024-07-22 19:36:26.275409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.344 [2024-07-22 19:36:26.275415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.275433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.344 [2024-07-22 19:36:26.275458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.344 [2024-07-22 19:36:26.275479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.344 [2024-07-22 19:36:26.275706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.344 [2024-07-22 19:36:26.275715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.344 [2024-07-22 19:36:26.275720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 
19:36:26.275734] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:07.344 [2024-07-22 19:36:26.275742] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:07.344 [2024-07-22 19:36:26.275755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.275768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.344 [2024-07-22 19:36:26.275779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.344 [2024-07-22 19:36:26.275793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.344 [2024-07-22 19:36:26.275983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.344 [2024-07-22 19:36:26.275992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.344 [2024-07-22 19:36:26.276002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.276008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.276023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.276029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.276035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.344 [2024-07-22 19:36:26.276045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.344 [2024-07-22 19:36:26.276058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.344 [2024-07-22 19:36:26.280214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.344 [2024-07-22 19:36:26.280232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.344 [2024-07-22 19:36:26.280237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.280243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.280260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.280267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:07.344 [2024-07-22 19:36:26.280273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:07.344 [2024-07-22 19:36:26.280287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.344 [2024-07-22 19:36:26.280306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:07.344 [2024-07-22 19:36:26.280511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:07.344 [2024-07-22 19:36:26.280523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:07.344 [2024-07-22 19:36:26.280529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:07.344 
[2024-07-22 19:36:26.280534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:07.344 [2024-07-22 19:36:26.280546] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:32:07.605 0% 00:32:07.605 Data Units Read: 0 00:32:07.605 Data Units Written: 0 00:32:07.605 Host Read Commands: 0 00:32:07.605 Host Write Commands: 0 00:32:07.605 Controller Busy Time: 0 minutes 00:32:07.605 Power Cycles: 0 00:32:07.605 Power On Hours: 0 hours 00:32:07.605 Unsafe Shutdowns: 0 00:32:07.605 Unrecoverable Media Errors: 0 00:32:07.605 Lifetime Error Log Entries: 0 00:32:07.605 Warning Temperature Time: 0 minutes 00:32:07.605 Critical Temperature Time: 0 minutes 00:32:07.605 00:32:07.605 Number of Queues 00:32:07.605 ================ 00:32:07.605 Number of I/O Submission Queues: 127 00:32:07.605 Number of I/O Completion Queues: 127 00:32:07.605 00:32:07.605 Active Namespaces 00:32:07.605 ================= 00:32:07.605 Namespace ID:1 00:32:07.605 Error Recovery Timeout: Unlimited 00:32:07.605 Command Set Identifier: NVM (00h) 00:32:07.605 Deallocate: Supported 00:32:07.605 Deallocated/Unwritten Error: Not Supported 00:32:07.605 Deallocated Read Value: Unknown 00:32:07.605 Deallocate in Write Zeroes: Not Supported 00:32:07.605 Deallocated Guard Field: 0xFFFF 00:32:07.605 Flush: Supported 00:32:07.605 Reservation: Supported 00:32:07.605 Namespace Sharing Capabilities: Multiple Controllers 00:32:07.605 Size (in LBAs): 131072 (0GiB) 00:32:07.605 Capacity (in LBAs): 131072 (0GiB) 00:32:07.605 Utilization (in LBAs): 131072 (0GiB) 00:32:07.605 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:07.605 EUI64: ABCDEF0123456789 00:32:07.605 UUID: 74570d85-c566-4010-927b-88fe5c5133ab 00:32:07.605 Thin Provisioning: Not Supported 00:32:07.605 Per-NS Atomic Units: Yes 00:32:07.605 Atomic Boundary Size (Normal): 0 00:32:07.606 Atomic Boundary Size (PFail): 0 00:32:07.606 Atomic Boundary Offset: 0 00:32:07.606 Maximum Single Source Range Length: 65535 00:32:07.606 Maximum Copy Length: 65535 00:32:07.606 Maximum Source Range Count: 1 00:32:07.606 NGUID/EUI64 Never Reused: No 00:32:07.606 Namespace Write Protected: No 00:32:07.606 Number of LBA Formats: 1 00:32:07.606 Current LBA Format: LBA Format #00 00:32:07.606 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:07.606 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:32:07.606 19:36:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:07.606 rmmod nvme_tcp 00:32:07.606 rmmod nvme_fabrics 00:32:07.606 rmmod nvme_keyring 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3073160 ']' 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3073160 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3073160 ']' 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3073160 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3073160 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3073160' 00:32:07.606 killing process with pid 3073160 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3073160 00:32:07.606 19:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3073160 00:32:08.547 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:08.547 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:08.547 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:08.547 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:08.547 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:08.548 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.548 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.548 19:36:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:11.094 00:32:11.094 real 0m12.111s 00:32:11.094 user 0m10.629s 00:32:11.094 sys 0m5.799s 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:11.094 ************************************ 00:32:11.094 END TEST nvmf_identify 00:32:11.094 ************************************ 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test 
nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.094 ************************************ 00:32:11.094 START TEST nvmf_perf 00:32:11.094 ************************************ 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:11.094 * Looking for test storage... 00:32:11.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:11.094 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:11.095 19:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:32:17.679 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:17.680 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:17.680 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:17.680 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:17.680 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.680 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:17.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:32:17.941 00:32:17.941 --- 10.0.0.2 ping statistics --- 00:32:17.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.941 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:32:17.941 00:32:17.941 --- 10.0.0.1 ping statistics --- 00:32:17.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.941 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3077704 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3077704 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3077704 ']' 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.941 19:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:17.941 [2024-07-22 19:36:36.801301] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:17.941 [2024-07-22 19:36:36.801419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.941 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.201 [2024-07-22 19:36:36.939264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:18.201 [2024-07-22 19:36:37.122374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.201 [2024-07-22 19:36:37.122424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.201 [2024-07-22 19:36:37.122436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.201 [2024-07-22 19:36:37.122446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.201 [2024-07-22 19:36:37.122457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
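(Aside: condensed from the nvmf_tcp_init/nvmfappstart trace above, this is roughly the loopback-style NVMe/TCP topology the test builds before starting the target. The cvl_0_* interface names, the 10.0.0.x addresses and the nvmf_tgt path are the values printed in this particular run and will differ on other hosts; treat this as a sketch of the traced commands, not a general recipe.)
  # target port moves into its own network namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
  # start the SPDK target inside the namespace, as nvmfappstart does in the trace above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &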
00:32:18.201 [2024-07-22 19:36:37.122641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.201 [2024-07-22 19:36:37.122724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.201 [2024-07-22 19:36:37.122841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.201 [2024-07-22 19:36:37.122867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:18.773 19:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:19.345 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:19.345 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:19.345 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:32:19.345 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:19.606 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:19.606 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:32:19.606 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:19.606 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:19.606 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:19.867 [2024-07-22 19:36:38.640951] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.867 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:20.127 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:20.127 19:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.127 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:20.127 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:20.390 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.390 [2024-07-22 19:36:39.311538] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.651 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:20.651 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:32:20.651 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:20.651 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:20.651 19:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:22.037 Initializing NVMe Controllers 00:32:22.037 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:32:22.037 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:32:22.037 Initialization complete. Launching workers. 00:32:22.037 ======================================================== 00:32:22.038 Latency(us) 00:32:22.038 Device Information : IOPS MiB/s Average min max 00:32:22.038 PCIE (0000:65:00.0) NSID 1 from core 0: 74256.89 290.07 430.37 14.12 4965.07 00:32:22.038 ======================================================== 00:32:22.038 Total : 74256.89 290.07 430.37 14.12 4965.07 00:32:22.038 00:32:22.038 19:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.299 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.685 Initializing NVMe Controllers 00:32:23.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:23.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:23.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:23.685 Initialization complete. Launching workers. 
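(Aside: the RPC sequence traced in perf.sh above reduces to the sketch below. The subsystem NQN, serial number, bdev names and the 10.0.0.2:4420 listener are the values shown in this log; rpc.py talks to the target over its default /var/tmp/spdk.sock UNIX socket, which is why no netns wrapper is needed here.)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512                                          # prints "Malloc0"
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
With the subsystem exported, the host-side runs that follow are plain spdk_nvme_perf invocations against -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', varying only queue depth and I/O size.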
00:32:23.685 ======================================================== 00:32:23.685 Latency(us) 00:32:23.685 Device Information : IOPS MiB/s Average min max 00:32:23.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.00 0.38 10603.30 362.41 45814.58 00:32:23.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16468.07 7273.78 47909.36 00:32:23.685 ======================================================== 00:32:23.685 Total : 158.00 0.62 12867.54 362.41 47909.36 00:32:23.685 00:32:23.685 19:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:23.685 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.070 Initializing NVMe Controllers 00:32:25.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:25.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:25.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:25.071 Initialization complete. Launching workers. 00:32:25.071 ======================================================== 00:32:25.071 Latency(us) 00:32:25.071 Device Information : IOPS MiB/s Average min max 00:32:25.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9651.98 37.70 3316.02 445.25 6589.91 00:32:25.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3875.99 15.14 8312.11 6766.32 16230.59 00:32:25.071 ======================================================== 00:32:25.071 Total : 13527.97 52.84 4747.48 445.25 16230.59 00:32:25.071 00:32:25.071 19:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:25.071 19:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:25.071 19:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:25.071 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.423 Initializing NVMe Controllers 00:32:28.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.423 Controller IO queue size 128, less than required. 00:32:28.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.423 Controller IO queue size 128, less than required. 00:32:28.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:28.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:28.423 Initialization complete. Launching workers. 
00:32:28.423 ======================================================== 00:32:28.423 Latency(us) 00:32:28.423 Device Information : IOPS MiB/s Average min max 00:32:28.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1022.41 255.60 129503.79 73503.01 247052.50 00:32:28.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.12 141.78 242314.64 103430.93 419513.46 00:32:28.423 ======================================================== 00:32:28.423 Total : 1589.53 397.38 169752.89 73503.01 419513.46 00:32:28.423 00:32:28.423 19:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:28.423 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.423 No valid NVMe controllers or AIO or URING devices found 00:32:28.423 Initializing NVMe Controllers 00:32:28.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.423 Controller IO queue size 128, less than required. 00:32:28.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.423 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:28.423 Controller IO queue size 128, less than required. 00:32:28.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.423 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:28.423 WARNING: Some requested NVMe devices were skipped 00:32:28.423 19:36:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:28.423 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.967 Initializing NVMe Controllers 00:32:30.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:30.967 Controller IO queue size 128, less than required. 00:32:30.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:30.967 Controller IO queue size 128, less than required. 00:32:30.967 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:30.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:30.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:30.967 Initialization complete. Launching workers. 
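(Aside: the "IO size 36964 (-o) is not a multiple of nsid N sector size 512" warnings above are plain integer arithmetic. With 512-byte LBAs a 36964-byte I/O is not a whole number of blocks, so spdk_nvme_perf drops both namespaces from the test and is then left with no valid controllers, which is exactly what this run reports. A quick shell check:)
  echo $(( 36964 / 512 ))   # 72 full 512-byte blocks
  echo $(( 36964 % 512 ))   # 100 bytes left over -> not block-aligned, namespace removed from the test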
00:32:30.967 00:32:30.967 ==================== 00:32:30.967 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:30.967 TCP transport: 00:32:30.967 polls: 15097 00:32:30.967 idle_polls: 5028 00:32:30.967 sock_completions: 10069 00:32:30.967 nvme_completions: 6897 00:32:30.967 submitted_requests: 10296 00:32:30.967 queued_requests: 1 00:32:30.967 00:32:30.967 ==================== 00:32:30.967 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:30.967 TCP transport: 00:32:30.967 polls: 17962 00:32:30.967 idle_polls: 8083 00:32:30.967 sock_completions: 9879 00:32:30.967 nvme_completions: 4129 00:32:30.967 submitted_requests: 6202 00:32:30.967 queued_requests: 1 00:32:30.967 ======================================================== 00:32:30.967 Latency(us) 00:32:30.967 Device Information : IOPS MiB/s Average min max 00:32:30.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1723.99 431.00 77157.53 40810.48 235570.92 00:32:30.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1032.00 258.00 127958.49 80876.55 321636.49 00:32:30.967 ======================================================== 00:32:30.967 Total : 2755.99 689.00 96180.24 40810.48 321636.49 00:32:30.967 00:32:30.967 19:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:30.967 19:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.227 19:36:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:31.227 19:36:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:32:31.227 19:36:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:32.170 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=962dbd73-9f5a-4645-ad16-bf82756917a9 00:32:32.170 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 962dbd73-9f5a-4645-ad16-bf82756917a9 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=962dbd73-9f5a-4645-ad16-bf82756917a9 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:32.431 { 00:32:32.431 "uuid": "962dbd73-9f5a-4645-ad16-bf82756917a9", 00:32:32.431 "name": "lvs_0", 00:32:32.431 "base_bdev": "Nvme0n1", 00:32:32.431 "total_data_clusters": 457407, 00:32:32.431 "free_clusters": 457407, 00:32:32.431 "block_size": 512, 00:32:32.431 "cluster_size": 4194304 00:32:32.431 } 00:32:32.431 ]' 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="962dbd73-9f5a-4645-ad16-bf82756917a9") .free_clusters' 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:32:32.431 19:36:51 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="962dbd73-9f5a-4645-ad16-bf82756917a9") .cluster_size' 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:32:32.431 1829628 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:32.431 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 962dbd73-9f5a-4645-ad16-bf82756917a9 lbd_0 20480 00:32:32.692 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8bde979f-ac5f-48f6-9274-0166e266ada4 00:32:32.692 19:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8bde979f-ac5f-48f6-9274-0166e266ada4 lvs_n_0 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=75cb4cc0-20d3-4e84-bb3d-c7fb46ff2af8 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 75cb4cc0-20d3-4e84-bb3d-c7fb46ff2af8 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=75cb4cc0-20d3-4e84-bb3d-c7fb46ff2af8 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:34.604 { 00:32:34.604 "uuid": "962dbd73-9f5a-4645-ad16-bf82756917a9", 00:32:34.604 "name": "lvs_0", 00:32:34.604 "base_bdev": "Nvme0n1", 00:32:34.604 "total_data_clusters": 457407, 00:32:34.604 "free_clusters": 452287, 00:32:34.604 "block_size": 512, 00:32:34.604 "cluster_size": 4194304 00:32:34.604 }, 00:32:34.604 { 00:32:34.604 "uuid": "75cb4cc0-20d3-4e84-bb3d-c7fb46ff2af8", 00:32:34.604 "name": "lvs_n_0", 00:32:34.604 "base_bdev": "8bde979f-ac5f-48f6-9274-0166e266ada4", 00:32:34.604 "total_data_clusters": 5114, 00:32:34.604 "free_clusters": 5114, 00:32:34.604 "block_size": 512, 00:32:34.604 "cluster_size": 4194304 00:32:34.604 } 00:32:34.604 ]' 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="75cb4cc0-20d3-4e84-bb3d-c7fb46ff2af8") .free_clusters' 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="75cb4cc0-20d3-4e84-bb3d-c7fb46ff2af8") .cluster_size' 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:32:34.604 20456 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:34.604 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 75cb4cc0-20d3-4e84-bb3d-c7fb46ff2af8 lbd_nest_0 20456 00:32:34.864 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=87a2c419-7c37-4042-960e-e0c558ef21cc 00:32:34.865 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:34.865 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:34.865 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 87a2c419-7c37-4042-960e-e0c558ef21cc 00:32:35.125 19:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:35.386 19:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:35.386 19:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:35.386 19:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:35.386 19:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:35.386 19:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:35.386 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.621 Initializing NVMe Controllers 00:32:47.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:47.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:47.621 Initialization complete. Launching workers. 00:32:47.621 ======================================================== 00:32:47.621 Latency(us) 00:32:47.621 Device Information : IOPS MiB/s Average min max 00:32:47.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.90 0.02 21376.45 320.46 47886.79 00:32:47.621 ======================================================== 00:32:47.621 Total : 46.90 0.02 21376.45 320.46 47886.79 00:32:47.621 00:32:47.621 19:37:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:47.621 19:37:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:47.621 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.619 Initializing NVMe Controllers 00:32:57.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:57.619 Initialization complete. Launching workers. 
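perf.sh is iterating over every combination of the qd_depth and io_size arrays declared above, launching one spdk_nvme_perf run per pair against the lvol-backed TCP subsystem. Stripped of the xtrace noise, the loop has roughly this shape (paths shortened; the flags match the traced commands, the surrounding shell is a simplified sketch rather than the script verbatim):

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      # 50/50 random read/write for 10 s at the given queue depth and block size
      ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

The queue-depth-1 runs are latency-bound by construction: with a single outstanding I/O, IOPS is roughly 1e6 divided by the average latency in microseconds, e.g. 1e6 / 21376.45 is about 46.8, which lines up with the 46.90 IOPS reported for the 512-byte run above.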
00:32:57.619 ======================================================== 00:32:57.619 Latency(us) 00:32:57.619 Device Information : IOPS MiB/s Average min max 00:32:57.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.60 8.82 14174.03 7493.43 48885.85 00:32:57.619 ======================================================== 00:32:57.619 Total : 70.60 8.82 14174.03 7493.43 48885.85 00:32:57.619 00:32:57.619 19:37:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:57.619 19:37:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:57.619 19:37:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:57.619 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.621 Initializing NVMe Controllers 00:33:07.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:07.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:07.621 Initialization complete. Launching workers. 00:33:07.621 ======================================================== 00:33:07.621 Latency(us) 00:33:07.621 Device Information : IOPS MiB/s Average min max 00:33:07.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8242.50 4.02 3882.06 339.59 8139.46 00:33:07.621 ======================================================== 00:33:07.621 Total : 8242.50 4.02 3882.06 339.59 8139.46 00:33:07.621 00:33:07.621 19:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:07.621 19:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:07.621 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.681 Initializing NVMe Controllers 00:33:17.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:17.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:17.681 Initialization complete. Launching workers. 00:33:17.681 ======================================================== 00:33:17.681 Latency(us) 00:33:17.681 Device Information : IOPS MiB/s Average min max 00:33:17.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2116.37 264.55 15133.12 886.32 37252.48 00:33:17.681 ======================================================== 00:33:17.681 Total : 2116.37 264.55 15133.12 886.32 37252.48 00:33:17.681 00:33:17.681 19:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:17.681 19:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:17.681 19:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:17.681 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.681 Initializing NVMe Controllers 00:33:27.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:27.681 Controller IO queue size 128, less than required. 
00:33:27.681 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:27.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:27.681 Initialization complete. Launching workers. 00:33:27.681 ======================================================== 00:33:27.681 Latency(us) 00:33:27.681 Device Information : IOPS MiB/s Average min max 00:33:27.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15628.77 7.63 8192.76 1778.36 22487.83 00:33:27.681 ======================================================== 00:33:27.681 Total : 15628.77 7.63 8192.76 1778.36 22487.83 00:33:27.681 00:33:27.681 19:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:27.681 19:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:27.681 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.678 Initializing NVMe Controllers 00:33:37.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.678 Controller IO queue size 128, less than required. 00:33:37.678 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:37.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:37.678 Initialization complete. Launching workers. 00:33:37.678 ======================================================== 00:33:37.678 Latency(us) 00:33:37.678 Device Information : IOPS MiB/s Average min max 00:33:37.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1166.74 145.84 110015.30 23561.72 234074.44 00:33:37.678 ======================================================== 00:33:37.678 Total : 1166.74 145.84 110015.30 23561.72 234074.44 00:33:37.678 00:33:37.678 19:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:37.940 19:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87a2c419-7c37-4042-960e-e0c558ef21cc 00:33:39.853 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:39.853 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8bde979f-ac5f-48f6-9274-0166e266ada4 00:33:39.853 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:33:40.114 19:37:58 
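Cleanup runs in the reverse order of setup: the NVMe-oF subsystem is removed first, then the nested lvol and its store, then the base lvol and its store, after which nvmftestfini unloads the nvme-tcp and nvme-fabrics kernel modules. Condensed from the traced RPCs above (rpc.py stands for scripts/rpc.py in the SPDK checkout, with the long workspace prefix dropped):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1          # stop exporting the namespaces
  rpc.py bdev_lvol_delete 87a2c419-7c37-4042-960e-e0c558ef21cc     # lbd_nest_0 on lvs_n_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                        # nested store built on lbd_0
  rpc.py bdev_lvol_delete 8bde979f-ac5f-48f6-9274-0166e266ada4     # lbd_0 on lvs_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_0                          # store on the Nvme0n1 bdev

Working top-down mirrors how the stack was built, so every delete operates on an object that nothing above it still uses.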
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:40.114 rmmod nvme_tcp 00:33:40.114 rmmod nvme_fabrics 00:33:40.114 rmmod nvme_keyring 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3077704 ']' 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3077704 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3077704 ']' 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3077704 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.114 19:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3077704 00:33:40.114 19:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:40.114 19:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:40.114 19:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3077704' 00:33:40.114 killing process with pid 3077704 00:33:40.114 19:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3077704 00:33:40.114 19:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3077704 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.414 19:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.328 19:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:45.328 00:33:45.328 real 1m34.166s 00:33:45.328 user 5m34.355s 00:33:45.328 sys 0m14.201s 00:33:45.328 19:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:45.329 ************************************ 00:33:45.329 END TEST nvmf_perf 00:33:45.329 ************************************ 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:45.329 
19:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.329 ************************************ 00:33:45.329 START TEST nvmf_fio_host 00:33:45.329 ************************************ 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:45.329 * Looking for test storage... 00:33:45.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.329 19:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:45.329 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.330 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.330 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.330 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:45.330 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:45.330 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:45.330 19:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.920 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:51.921 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:51.921 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:51.921 19:38:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:51.921 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:51.921 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.921 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.183 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.183 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.183 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:52.183 19:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:52.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:33:52.183 00:33:52.183 --- 10.0.0.2 ping statistics --- 00:33:52.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.183 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:33:52.183 00:33:52.183 --- 10.0.0.1 ping statistics --- 00:33:52.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.183 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3097653 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3097653 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3097653 ']' 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:52.183 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.447 [2024-07-22 19:38:11.200256] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
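The fio_host test talks to a target in its own network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator side with 10.0.0.1, and the two pings above confirm connectivity in both directions before anything NVMe-related starts. The plumbing, condensed from the nvmf_tcp_init trace (interface names are the cvl_* devices detected earlier; this is the gist, not the function verbatim):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

With connectivity verified, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced above) and waitforlisten blocks on /var/tmp/spdk.sock before any provisioning, so the NVMe/TCP traffic crosses the two physical E810 ports rather than the kernel loopback.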
00:33:52.447 [2024-07-22 19:38:11.200383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.447 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.447 [2024-07-22 19:38:11.320820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:52.753 [2024-07-22 19:38:11.505255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.753 [2024-07-22 19:38:11.505296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.753 [2024-07-22 19:38:11.505310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.753 [2024-07-22 19:38:11.505320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.753 [2024-07-22 19:38:11.505330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.753 [2024-07-22 19:38:11.505509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.753 [2024-07-22 19:38:11.505625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.753 [2024-07-22 19:38:11.505768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.753 [2024-07-22 19:38:11.505794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.014 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:53.014 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:33:53.014 19:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:53.275 [2024-07-22 19:38:12.097314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.275 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:53.275 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:53.275 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.275 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:53.536 Malloc1 00:33:53.536 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.797 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:53.797 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.059 [2024-07-22 19:38:12.855954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.059 19:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:54.320 
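Once the target's RPC socket is up, fio.sh provisions a minimal NVMe/TCP export entirely over rpc.py: create the TCP transport, back it with a small malloc bdev, wrap that in a subsystem, and publish listeners for the subsystem and for discovery. In order, the traced calls boil down to (same rpc.py shorthand as before; the comments are my reading of the flags, not script output):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB IO unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc1                     # 64 MiB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The -a flag allows any host NQN to connect, which is fine for a self-contained CI namespace but not something to copy into a production configuration.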
19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:54.320 19:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:54.580 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:54.580 fio-3.35 00:33:54.580 Starting 1 thread 00:33:54.840 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.385 00:33:57.385 test: (groupid=0, jobs=1): err= 0: pid=3098466: Mon Jul 22 19:38:15 2024 00:33:57.385 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(91.8MiB/2005msec) 00:33:57.385 slat (usec): min=2, max=313, avg= 2.41, stdev= 2.96 00:33:57.385 clat (usec): min=4254, max=9952, avg=6016.70, stdev=955.80 00:33:57.385 lat (usec): min=4256, max=9954, avg=6019.11, stdev=955.87 00:33:57.385 clat percentiles (usec): 00:33:57.385 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 
5407], 00:33:57.385 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:33:57.385 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 7832], 95.00th=[ 8291], 00:33:57.385 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9634], 99.95th=[ 9634], 00:33:57.385 | 99.99th=[ 9896] 00:33:57.385 bw ( KiB/s): min=36320, max=50672, per=100.00%, avg=46876.00, stdev=7045.67, samples=4 00:33:57.385 iops : min= 9080, max=12668, avg=11719.00, stdev=1761.42, samples=4 00:33:57.385 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2005msec); 0 zone resets 00:33:57.385 slat (usec): min=2, max=336, avg= 2.51, stdev= 2.41 00:33:57.385 clat (usec): min=3325, max=9398, avg=4864.06, stdev=792.92 00:33:57.385 lat (usec): min=3328, max=9401, avg=4866.57, stdev=793.02 00:33:57.385 clat percentiles (usec): 00:33:57.385 | 1.00th=[ 3851], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:33:57.385 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4752], 00:33:57.385 | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 6390], 95.00th=[ 6718], 00:33:57.385 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 7963], 99.95th=[ 8455], 00:33:57.385 | 99.99th=[ 9372] 00:33:57.385 bw ( KiB/s): min=37264, max=50240, per=99.96%, avg=46548.00, stdev=6204.61, samples=4 00:33:57.385 iops : min= 9316, max=12560, avg=11637.00, stdev=1551.15, samples=4 00:33:57.385 lat (msec) : 4=1.46%, 10=98.54% 00:33:57.385 cpu : usr=70.86%, sys=25.40%, ctx=17, majf=0, minf=1525 00:33:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.385 issued rwts: total=23497,23342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.385 00:33:57.385 Run status group 0 (all jobs): 00:33:57.385 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=91.8MiB (96.2MB), run=2005-2005msec 00:33:57.385 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.6MB), run=2005-2005msec 00:33:57.385 ----------------------------------------------------- 00:33:57.385 Suppressions used: 00:33:57.385 count bytes template 00:33:57.385 1 57 /usr/src/fio/parse.c 00:33:57.385 1 8 libtcmalloc_minimal.so 00:33:57.385 ----------------------------------------------------- 00:33:57.385 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:57.385 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:57.386 19:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:57.646 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:57.646 fio-3.35 00:33:57.646 Starting 1 thread 00:33:57.646 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.189 00:34:00.189 test: (groupid=0, jobs=1): err= 0: pid=3099141: Mon Jul 22 19:38:18 2024 00:34:00.189 read: IOPS=8179, BW=128MiB/s (134MB/s)(256MiB/2003msec) 00:34:00.189 slat (usec): min=3, max=117, avg= 3.92, stdev= 1.49 00:34:00.189 clat (usec): min=2831, max=55208, avg=9485.42, stdev=4163.46 00:34:00.189 lat (usec): min=2834, max=55212, avg=9489.34, stdev=4163.49 00:34:00.189 clat percentiles (usec): 00:34:00.189 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7242], 00:34:00.189 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:34:00.189 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11994], 95.00th=[12518], 00:34:00.189 | 99.00th=[16057], 99.50th=[49546], 99.90th=[53740], 99.95th=[54264], 00:34:00.189 | 99.99th=[55313] 00:34:00.189 bw ( KiB/s): min=51808, max=78432, per=51.17%, avg=66968.00, stdev=13210.13, samples=4 00:34:00.189 iops : min= 3238, max= 4902, avg=4185.50, stdev=825.63, samples=4 00:34:00.189 write: IOPS=4944, BW=77.3MiB/s (81.0MB/s)(137MiB/1773msec); 0 zone resets 00:34:00.189 slat (usec): min=40, max=277, avg=41.71, stdev= 5.21 00:34:00.189 clat (usec): min=2685, max=17301, avg=10438.08, stdev=1667.98 00:34:00.189 lat (usec): min=2725, max=17346, avg=10479.79, stdev=1668.24 00:34:00.189 clat percentiles (usec): 00:34:00.189 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 9110], 00:34:00.189 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:34:00.189 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12518], 95.00th=[13566], 00:34:00.189 | 99.00th=[15533], 99.50th=[16319], 99.90th=[17171], 99.95th=[17171], 00:34:00.189 | 99.99th=[17171] 00:34:00.189 bw ( KiB/s): 
min=54304, max=81440, per=87.95%, avg=69584.00, stdev=13714.04, samples=4 00:34:00.189 iops : min= 3394, max= 5090, avg=4349.00, stdev=857.13, samples=4 00:34:00.189 lat (msec) : 4=0.14%, 10=56.33%, 20=43.02%, 50=0.20%, 100=0.31% 00:34:00.189 cpu : usr=83.97%, sys=13.84%, ctx=15, majf=0, minf=2213 00:34:00.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:34:00.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.189 issued rwts: total=16383,8767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.189 00:34:00.189 Run status group 0 (all jobs): 00:34:00.189 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (268MB), run=2003-2003msec 00:34:00.189 WRITE: bw=77.3MiB/s (81.0MB/s), 77.3MiB/s-77.3MiB/s (81.0MB/s-81.0MB/s), io=137MiB (144MB), run=1773-1773msec 00:34:00.449 ----------------------------------------------------- 00:34:00.449 Suppressions used: 00:34:00.449 count bytes template 00:34:00.449 1 57 /usr/src/fio/parse.c 00:34:00.449 813 78048 /usr/src/fio/iolog.c 00:34:00.449 1 8 libtcmalloc_minimal.so 00:34:00.449 ----------------------------------------------------- 00:34:00.449 00:34:00.449 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:00.449 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:34:00.450 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:34:00.450 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:34:00.450 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:00.450 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:34:00.450 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:00.450 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:00.450 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:00.710 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:00.710 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:34:00.710 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:34:01.280 Nvme0n1 00:34:01.280 19:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b701162f-52fc-44fd-ac41-eed7ca0e1d33 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b701162f-52fc-44fd-ac41-eed7ca0e1d33 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=b701162f-52fc-44fd-ac41-eed7ca0e1d33 
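get_lvs_free_mb, which runs next, derives the usable capacity of a freshly created lvol store from the bdev_lvol_get_lvstores output: free MiB = free_clusters x cluster_size / 1 MiB. The helper is essentially jq plus shell arithmetic, roughly (a sketch, not the helper verbatim; the UUID is the one returned by bdev_lvol_create_lvstore just above):

  lvs_uuid=b701162f-52fc-44fd-ac41-eed7ca0e1d33
  lvs_info=$(rpc.py bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<< "$lvs_info")
  cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size"  <<< "$lvs_info")
  free_mb=$((fc * cs / 1024 / 1024))
  echo "$free_mb"

With the default 4 MiB clusters used by the perf test earlier, 457407 free clusters gave 457407 x 4 = 1829628 MiB; here the store is created with -c 1073741824 (1 GiB clusters), so 1787 clusters work out to 1787 x 1024 = 1829888 MiB, the value the trace echoes just below.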
00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:01.852 { 00:34:01.852 "uuid": "b701162f-52fc-44fd-ac41-eed7ca0e1d33", 00:34:01.852 "name": "lvs_0", 00:34:01.852 "base_bdev": "Nvme0n1", 00:34:01.852 "total_data_clusters": 1787, 00:34:01.852 "free_clusters": 1787, 00:34:01.852 "block_size": 512, 00:34:01.852 "cluster_size": 1073741824 00:34:01.852 } 00:34:01.852 ]' 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b701162f-52fc-44fd-ac41-eed7ca0e1d33") .free_clusters' 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:34:01.852 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b701162f-52fc-44fd-ac41-eed7ca0e1d33") .cluster_size' 00:34:02.114 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:34:02.114 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:34:02.114 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:34:02.114 1829888 00:34:02.114 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:34:02.114 a123c515-501e-418a-9f7f-541795c6434b 00:34:02.114 19:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:02.375 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:34:02.375 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 
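The 1829888 handed to bdev_lvol_create just above is derived, not hard-coded: get_lvs_free_mb multiplies the store's free cluster count by its cluster size and converts the result to MiB. The same arithmetic written out as a sketch, reusing the helper's own variable names from the trace:

  fc=1787                          # free_clusters from bdev_lvol_get_lvstores
  cs=1073741824                    # cluster_size in bytes (1 GiB clusters)
  free_mb=$(( fc * cs / 1024 / 1024 ))
  echo "$free_mb"                  # 1829888 MiB, the size passed to bdev_lvol_create

The later lvs_n_0 figure follows the same rule: 457025 free clusters of 4 MiB each give 1828100 MiB.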
00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:02.636 19:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:03.220 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:03.220 fio-3.35 00:34:03.220 Starting 1 thread 00:34:03.220 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.760 00:34:05.760 test: (groupid=0, jobs=1): err= 0: pid=3100281: Mon Jul 22 19:38:24 2024 00:34:05.760 read: IOPS=9334, BW=36.5MiB/s (38.2MB/s)(73.1MiB/2006msec) 00:34:05.760 slat (usec): min=2, max=122, avg= 2.46, stdev= 1.21 00:34:05.760 clat (usec): min=2891, max=12348, avg=7551.34, stdev=571.50 00:34:05.760 lat (usec): min=2913, max=12351, avg=7553.80, stdev=571.43 00:34:05.760 clat percentiles (usec): 00:34:05.760 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:34:05.760 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:34:05.760 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:34:05.760 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[10421], 99.95th=[11207], 00:34:05.760 | 99.99th=[12387] 00:34:05.760 bw ( KiB/s): min=36000, max=38056, per=99.95%, avg=37322.00, stdev=905.79, samples=4 00:34:05.760 iops : min= 9000, max= 9514, avg=9330.50, stdev=226.45, samples=4 00:34:05.760 write: IOPS=9340, BW=36.5MiB/s (38.3MB/s)(73.2MiB/2006msec); 0 zone resets 00:34:05.760 slat (nsec): min=2329, max=116908, avg=2586.70, stdev=924.29 00:34:05.760 clat (usec): min=1377, max=11147, avg=6043.01, stdev=496.66 00:34:05.760 lat (usec): min=1386, max=11150, avg=6045.59, stdev=496.63 00:34:05.760 clat percentiles (usec): 00:34:05.760 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:34:05.760 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:34:05.760 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6783], 00:34:05.760 | 99.00th=[ 7111], 
99.50th=[ 7242], 99.90th=[ 9503], 99.95th=[10290], 00:34:05.760 | 99.99th=[11076] 00:34:05.760 bw ( KiB/s): min=36856, max=37744, per=99.97%, avg=37350.00, stdev=378.31, samples=4 00:34:05.760 iops : min= 9214, max= 9436, avg=9337.50, stdev=94.58, samples=4 00:34:05.760 lat (msec) : 2=0.01%, 4=0.10%, 10=99.81%, 20=0.09% 00:34:05.760 cpu : usr=69.38%, sys=27.48%, ctx=49, majf=0, minf=1523 00:34:05.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:05.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:05.760 issued rwts: total=18726,18737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:05.760 00:34:05.760 Run status group 0 (all jobs): 00:34:05.760 READ: bw=36.5MiB/s (38.2MB/s), 36.5MiB/s-36.5MiB/s (38.2MB/s-38.2MB/s), io=73.1MiB (76.7MB), run=2006-2006msec 00:34:05.760 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.2MiB (76.7MB), run=2006-2006msec 00:34:05.760 ----------------------------------------------------- 00:34:05.760 Suppressions used: 00:34:05.760 count bytes template 00:34:05.760 1 58 /usr/src/fio/parse.c 00:34:05.760 1 8 libtcmalloc_minimal.so 00:34:05.760 ----------------------------------------------------- 00:34:05.760 00:34:05.760 19:38:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:06.020 19:38:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:06.592 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a3befec9-ff46-4d6e-877c-3bb7de1ad13d 00:34:06.592 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a3befec9-ff46-4d6e-877c-3bb7de1ad13d 00:34:06.592 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a3befec9-ff46-4d6e-877c-3bb7de1ad13d 00:34:06.592 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:06.592 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:06.592 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:06.592 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:06.852 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:06.852 { 00:34:06.852 "uuid": "b701162f-52fc-44fd-ac41-eed7ca0e1d33", 00:34:06.852 "name": "lvs_0", 00:34:06.852 "base_bdev": "Nvme0n1", 00:34:06.852 "total_data_clusters": 1787, 00:34:06.852 "free_clusters": 0, 00:34:06.852 "block_size": 512, 00:34:06.852 "cluster_size": 1073741824 00:34:06.852 }, 00:34:06.852 { 00:34:06.852 "uuid": "a3befec9-ff46-4d6e-877c-3bb7de1ad13d", 00:34:06.852 "name": "lvs_n_0", 00:34:06.852 "base_bdev": "a123c515-501e-418a-9f7f-541795c6434b", 00:34:06.852 "total_data_clusters": 457025, 00:34:06.852 "free_clusters": 457025, 00:34:06.852 "block_size": 512, 00:34:06.852 "cluster_size": 4194304 00:34:06.852 } 00:34:06.852 ]' 00:34:06.852 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a3befec9-ff46-4d6e-877c-3bb7de1ad13d") .free_clusters' 00:34:06.852 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:34:06.852 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a3befec9-ff46-4d6e-877c-3bb7de1ad13d") .cluster_size' 00:34:06.852 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:34:06.852 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:34:06.852 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:34:06.852 1828100 00:34:06.853 19:38:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:34:09.397 0ed82c31-c42c-4968-9c14-a5b0bdc66b15 00:34:09.398 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:09.398 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:09.658 19:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:10.241 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:10.241 fio-3.35 00:34:10.241 Starting 1 thread 00:34:10.241 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.792 00:34:12.792 test: (groupid=0, jobs=1): err= 0: pid=3101757: Mon Jul 22 19:38:31 2024 00:34:12.792 read: IOPS=5742, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2010msec) 00:34:12.792 slat (usec): min=2, max=120, avg= 2.45, stdev= 1.56 00:34:12.792 clat (usec): min=4300, max=20941, avg=12332.07, stdev=1028.85 00:34:12.792 lat (usec): min=4320, max=20944, avg=12334.52, stdev=1028.73 00:34:12.792 clat percentiles (usec): 00:34:12.792 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:34:12.792 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:34:12.792 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:34:12.792 | 99.00th=[14615], 99.50th=[14746], 99.90th=[18482], 99.95th=[19530], 00:34:12.792 | 99.99th=[20841] 00:34:12.792 bw ( KiB/s): min=21824, max=23480, per=99.93%, avg=22954.00, stdev=764.73, samples=4 00:34:12.792 iops : min= 5456, max= 5870, avg=5738.50, stdev=191.18, samples=4 00:34:12.792 write: IOPS=5731, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec); 0 zone resets 00:34:12.792 slat (usec): min=2, max=109, avg= 2.56, stdev= 1.14 00:34:12.792 clat (usec): min=2055, max=19687, avg=9815.30, stdev=933.57 00:34:12.792 lat (usec): min=2065, max=19689, avg=9817.86, stdev=933.50 00:34:12.792 clat percentiles (usec): 00:34:12.792 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:34:12.792 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:34:12.792 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:34:12.792 | 99.00th=[11731], 99.50th=[11994], 99.90th=[17171], 99.95th=[18744], 00:34:12.792 | 99.99th=[19792] 00:34:12.792 bw ( KiB/s): min=22784, max=23088, per=99.96%, avg=22916.00, stdev=128.91, samples=4 00:34:12.792 iops : min= 5696, max= 5772, avg=5729.00, stdev=32.23, samples=4 00:34:12.792 lat (msec) : 4=0.04%, 10=30.48%, 20=69.46%, 50=0.01% 00:34:12.792 cpu : usr=70.03%, sys=27.87%, ctx=34, majf=0, minf=1524 00:34:12.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:34:12.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:12.793 issued rwts: total=11542,11520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:12.793 00:34:12.793 Run status group 0 (all jobs): 00:34:12.793 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), 
run=2010-2010msec 00:34:12.793 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2010-2010msec 00:34:12.793 ----------------------------------------------------- 00:34:12.793 Suppressions used: 00:34:12.793 count bytes template 00:34:12.793 1 58 /usr/src/fio/parse.c 00:34:12.793 1 8 libtcmalloc_minimal.so 00:34:12.793 ----------------------------------------------------- 00:34:12.793 00:34:12.793 19:38:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:13.054 19:38:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:13.054 19:38:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:16.389 19:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:16.389 19:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:16.961 19:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:17.222 19:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:19.135 rmmod nvme_tcp 00:34:19.135 rmmod nvme_fabrics 00:34:19.135 rmmod nvme_keyring 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3097653 ']' 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3097653 00:34:19.135 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3097653 ']' 00:34:19.136 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3097653 00:34:19.136 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:34:19.397 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:19.397 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3097653 00:34:19.397 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:19.397 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:19.397 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3097653' 00:34:19.397 killing process with pid 3097653 00:34:19.397 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3097653 00:34:19.397 19:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3097653 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.339 19:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:22.888 00:34:22.888 real 0m37.348s 00:34:22.888 user 3m3.006s 00:34:22.888 sys 0m12.354s 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.888 ************************************ 00:34:22.888 END TEST nvmf_fio_host 00:34:22.888 ************************************ 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.888 ************************************ 00:34:22.888 START TEST nvmf_failover 00:34:22.888 ************************************ 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:22.888 * Looking for test storage... 
00:34:22.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.888 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
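nvmftestinit, traced below, turns the detected NIC pair (cvl_0_0/cvl_0_1) into a loop-back TCP test bed: one port is moved into a private network namespace and addressed as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1). Stripped of the surrounding bookkeeping, the commands it issues amount to roughly:

  ip netns add cvl_0_0_ns_spdk                       # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # sanity-check reachability both ways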
00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:34:22.889 19:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.478 19:38:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:29.478 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:29.478 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:29.479 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:29.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:29.479 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:29.479 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:29.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:29.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.751 ms 00:34:29.739 00:34:29.739 --- 10.0.0.2 ping statistics --- 00:34:29.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.739 rtt min/avg/max/mdev = 0.751/0.751/0.751/0.000 ms 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:29.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:29.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:34:29.739 00:34:29.739 --- 10.0.0.1 ping statistics --- 00:34:29.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.739 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3107655 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3107655 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3107655 ']' 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:29.739 19:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:29.739 [2024-07-22 19:38:48.646542] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:29.739 [2024-07-22 19:38:48.646669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.000 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.000 [2024-07-22 19:38:48.797518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:30.261 [2024-07-22 19:38:49.027808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.261 [2024-07-22 19:38:49.027874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.261 [2024-07-22 19:38:49.027889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.261 [2024-07-22 19:38:49.027900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.261 [2024-07-22 19:38:49.027911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
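With the target process up, the failover test (traced below) exports a single Malloc-backed namespace on three TCP listeners and drives it with bdevperf, so that listeners can later be removed one at a time and force the initiator onto another path. In RPC terms the setup is roughly (a sketch; "rpc" again stands for scripts/rpc.py against the target socket):

  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MB malloc bdev, 512-byte blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                       # one listener per test port
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

bdevperf then attaches NVMe0 over 10.0.0.2:4420 (and later 4422), and the test removes listeners while I/O is running, which is the failover being exercised.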
00:34:30.261 [2024-07-22 19:38:49.028089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:30.261 [2024-07-22 19:38:49.028297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.261 [2024-07-22 19:38:49.028327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:30.521 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:30.521 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:30.521 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:30.521 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:30.521 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:30.521 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.522 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:30.783 [2024-07-22 19:38:49.561282] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.783 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:31.044 Malloc0 00:34:31.044 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:31.044 19:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:31.304 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.564 [2024-07-22 19:38:50.286916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.564 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:31.564 [2024-07-22 19:38:50.455373] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:31.564 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:31.824 [2024-07-22 19:38:50.615853] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:31.824 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3108020 00:34:31.824 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:31.824 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:31.824 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3108020 /var/tmp/bdevperf.sock 00:34:31.824 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3108020 ']' 00:34:31.825 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:31.825 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:31.825 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:31.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:31.825 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:31.825 19:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:32.768 19:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:32.768 19:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:32.768 19:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:33.030 NVMe0n1 00:34:33.030 19:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:33.313 00:34:33.313 19:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3108356 00:34:33.313 19:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:33.313 19:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:34.254 19:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.516 [2024-07-22 19:38:53.292745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 [2024-07-22 19:38:53.292794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 [2024-07-22 19:38:53.292802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 [2024-07-22 19:38:53.292809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 [2024-07-22 19:38:53.292815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 [2024-07-22 19:38:53.292821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 [2024-07-22 19:38:53.292827] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 (the identical recv-state message for tqpair=0x618000003880 repeats once per timestamp from 19:38:53.292834 through 19:38:53.293078) 00:34:34.516 [2024-07-22 19:38:53.293084]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:34:34.516 19:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:37.817 19:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:37.817 00:34:37.817 19:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:38.078 [2024-07-22 19:38:56.820887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.820999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.821005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.821011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.821017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.821022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.821029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.078 [2024-07-22 19:38:56.821035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.079 [2024-07-22 19:38:56.821041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.079 [2024-07-22 19:38:56.821052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.079 [2024-07-22 19:38:56.821058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:38.079 19:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:41.395 19:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.395 [2024-07-22 19:38:59.999754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.395 19:39:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:42.360 19:39:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:42.360 [2024-07-22 19:39:01.175565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.360 [2024-07-22 19:39:01.175667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.361 [2024-07-22 19:39:01.175673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 
00:34:42.361 [... the same tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x618000004c80 repeats verbatim for every timestamp from 19:39:01.175679 through 19:39:01.176330; roughly 105 duplicate entries omitted ...]
00:34:42.362 [2024-07-22 19:39:01.176336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.362 [2024-07-22 19:39:01.176342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:42.362 19:39:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3108356 00:34:48.950 0 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3108020 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3108020 ']' 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3108020 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3108020 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3108020' 00:34:48.950 killing process with pid 3108020 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3108020 00:34:48.950 19:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3108020 00:34:49.221 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:49.221 [2024-07-22 19:38:50.723962] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:49.222 [2024-07-22 19:38:50.724079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108020 ] 00:34:49.222 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.222 [2024-07-22 19:38:50.834744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.222 [2024-07-22 19:38:51.012395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.222 Running I/O for 15 seconds... 
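The try.txt capture that follows is bdevperf's own log of the run driven above. For readability, here is a condensed shell sketch of the RPC sequence this console recorded (failover.sh@35 through @59). It only restates the rpc.py and bdevperf.py invocations already shown, with the run_test/waitforlisten wrappers and the target-side setup left out, so treat it as an illustration of the flow rather than the actual test/nvmf/host/failover.sh. The paths, the address 10.0.0.2, ports 4420-4422 and the NQN are copied verbatim from the log; bdevperf itself is assumed to be already running with its RPC socket on /var/tmp/bdevperf.sock, started earlier with the (truncated) command line at the top of this section (-q 128 -o 4096 -w verify -t 15 -f).

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
SOCK=/var/tmp/bdevperf.sock        # bdevperf's RPC socket (initiator side)
NQN=nqn.2016-06.io.spdk:cnode1

# Attach the same controller name over ports 4420 and 4421 so NVMe0 has an
# alternate path to fail over to when a listener disappears.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"

# Kick off the 128-deep, 4 KiB verify workload (15 s) inside the running
# bdevperf app (failover.sh@38/@39 above).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
run_test_pid=$!
sleep 1

# Rotate listeners on the target while I/O is in flight. The listener RPCs go
# to the target's default RPC socket, which is why they carry no -s option.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # 19:38:53, first tcp.c:1653 burst above
sleep 3
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
sleep 3
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # target logs "Listening on 10.0.0.2 port 4420"
sleep 1
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
wait "$run_test_pid"    # failover.sh@59; bdevperf is then killed and try.txt dumped below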
00:34:49.222 [2024-07-22 19:38:53.297018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [2024-07-22 19:38:53.297330] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.222 [2024-07-22 19:38:53.297340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.222 [... the dump continues in the same pattern from 19:38:53.297359 through 19:38:53.299190: one nvme_io_qpair_print_command *NOTICE* (WRITE, len:8, lba:86760 and upward) followed by an ABORTED - SQ DELETION (00/08) completion for each in-flight command, then nvme_qpair_abort_queued_reqs *ERROR* entries with manually completed WRITEs (PRP1 0x0 PRP2 0x0), each likewise reported as ABORTED - SQ DELETION; these repetitive entries are condensed here ...] 00:34:49.224 [2024-07-22 19:38:53.299197] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87352 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87360 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87368 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87376 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87384 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87392 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87400 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87408 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87416 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87424 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87432 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87440 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 
19:38:53.299645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87448 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87456 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87464 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87472 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87480 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87488 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.224 [2024-07-22 19:38:53.299847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.224 [2024-07-22 19:38:53.299858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.224 [2024-07-22 19:38:53.299867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87496 len:8 PRP1 0x0 PRP2 0x0 00:34:49.224 [2024-07-22 19:38:53.299877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.299887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.299895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.299903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87504 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.299914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.299924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.299932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.299940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87512 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.299952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.299962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.299970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.299979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87520 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.299989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.299998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87528 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87536 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:87544 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87552 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87560 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87568 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87576 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87584 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87592 len:8 PRP1 0x0 PRP2 0x0 
00:34:49.225 [2024-07-22 19:38:53.300320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87600 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87608 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87616 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87624 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87632 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.300511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.300520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.300528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87640 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.300538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.310059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.310073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87648 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.310087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.310107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.310116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87656 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.310127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.310145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.310154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87664 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.310164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.310182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.310190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87672 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.310208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.310226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.310235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87680 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.310245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.225 [2024-07-22 19:38:53.310262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.225 [2024-07-22 19:38:53.310271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87688 len:8 PRP1 0x0 PRP2 0x0 00:34:49.225 [2024-07-22 19:38:53.310281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310495] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389300 was disconnected and freed. reset controller. 00:34:49.225 [2024-07-22 19:38:53.310512] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:49.225 [2024-07-22 19:38:53.310553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.225 [2024-07-22 19:38:53.310573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.225 [2024-07-22 19:38:53.310603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.225 [2024-07-22 19:38:53.310625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.225 [2024-07-22 19:38:53.310636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.225 [2024-07-22 19:38:53.310646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:53.310656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:49.226 [2024-07-22 19:38:53.310706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:34:49.226 [2024-07-22 19:38:53.314534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:49.226 [2024-07-22 19:38:53.403735] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
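The completions in the long burst above all carry NVMe status (00/08): Status Code Type 0x0 (generic command status) with Status Code 0x08, Command Aborted due to SQ Deletion. That is the expected status for I/O still queued on a submission queue that is being deleted, and it matches the bdev_nvme lines immediately above: qpair 0x615000389300 is disconnected and freed, the path fails over from 10.0.0.2:4420 to 10.0.0.2:4421, the controller for nqn.2016-06.io.spdk:cnode1 is marked failed and reset, and the reset completes successfully before the next burst of aborts at 19:38:56. The short Python sketch below tallies these (SCT/SC) pairs out of a console log such as this one; the regex and the summarize() helper are illustrative assumptions for reading the log, not SPDK tooling.

# Minimal sketch: count spdk_nvme_print_completion status pairs in an SPDK console log.
# The "(00/08)" field is (Status Code Type / Status Code) in hex; 0x0/0x08 is the
# generic "Command Aborted due to SQ Deletion" status seen throughout this test.
import re
from collections import Counter

CPL = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (?P<msg>.+?) "
                 r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)")

def summarize(log_text: str) -> Counter:
    """Return a Counter keyed by (SCT, SC, message) for every completion print."""
    counts = Counter()
    for m in CPL.finditer(log_text):
        counts[(int(m["sct"], 16), int(m["sc"], 16), m["msg"])] += 1
    return counts

if __name__ == "__main__":
    sample = ("[2024-07-22 19:38:53.298498] nvme_qpair.c: 474:"
              "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION "
              "(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
    print(summarize(sample))  # Counter({(0, 8, 'ABORTED - SQ DELETION'): 1})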
00:34:49.226 [2024-07-22 19:38:56.822229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 
19:38:56.822527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.822981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.822991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.823003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.823013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.823025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.823035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.823049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.823060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.823073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.226 [2024-07-22 19:38:56.823083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.226 [2024-07-22 19:38:56.823095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.227 [2024-07-22 19:38:56.823387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127224 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 
[2024-07-22 19:38:56.823680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823909] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.823988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.823998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.824010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.824020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.824032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.824043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.824055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.824065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.824077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.824087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.824099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.824109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.824121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.227 [2024-07-22 19:38:56.824131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.227 [2024-07-22 19:38:56.824144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.228 [2024-07-22 19:38:56.824472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127592 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127600 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127608 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126976 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126984 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126992 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127000 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127008 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127016 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 
19:38:56.824844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127024 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127032 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127040 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.824961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127048 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.824975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.824985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.824992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.825001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127056 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.825011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.825021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.825028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.825037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127064 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.825047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.825057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.825064] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.825075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127072 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.825085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.825095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.825103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.825112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127080 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.825122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.825132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.228 [2024-07-22 19:38:56.825140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.228 [2024-07-22 19:38:56.825148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127088 len:8 PRP1 0x0 PRP2 0x0 00:34:49.228 [2024-07-22 19:38:56.825159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.228 [2024-07-22 19:38:56.825168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127096 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127104 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127112 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127120 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127128 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127136 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127144 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127152 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127160 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 
19:38:56.825542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127168 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127176 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127184 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127192 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.229 [2024-07-22 19:38:56.825682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.229 [2024-07-22 19:38:56.825690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127200 len:8 PRP1 0x0 PRP2 0x0 00:34:49.229 [2024-07-22 19:38:56.825700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825901] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389800 was disconnected and freed. reset controller. 
00:34:49.229 [2024-07-22 19:38:56.825915] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:49.229 [2024-07-22 19:38:56.825944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.229 [2024-07-22 19:38:56.825957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.229 [2024-07-22 19:38:56.825982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.825993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.229 [2024-07-22 19:38:56.826003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.826013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:49.229 [2024-07-22 19:38:56.826024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:38:56.826034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:49.229 [2024-07-22 19:38:56.829842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:49.229 [2024-07-22 19:38:56.829890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:34:49.229 [2024-07-22 19:38:56.907526] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
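Editor's note: the block above shows one complete failover episode — the qpair on 10.0.0.2:4421 is torn down, every queued READ/WRITE is completed with "ABORTED - SQ DELETION", bdev_nvme fails over to 10.0.0.2:4422, and the controller reset finishes ("Resetting controller successful"). The per-command NOTICE lines are easier to audit in aggregate; the snippet below is a small, hypothetical helper (not part of SPDK or this test suite, and the log file name "console.log" plus the regular expression are assumptions based only on the message format visible in this output) that counts the aborted I/O command prints and their LBA range from a saved copy of this console log.

#!/usr/bin/env python3
# Hypothetical helper: summarize SPDK "ABORTED - SQ DELETION" episodes in a saved
# console log (e.g. this Jenkins output redirected to console.log). Not part of SPDK.
import re
import sys
from collections import Counter

# Matches the queued-command print that accompanies each abort completion in this log, e.g.
# "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127312 len:8"
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) .*?lba:(\d+)"
)

def summarize(path: str) -> None:
    per_opcode = Counter()
    lbas = []
    with open(path, "r", errors="replace") as fh:
        for line in fh:
            for opcode, _sqid, lba in CMD_RE.findall(line):
                per_opcode[opcode] += 1
                lbas.append(int(lba))
    if not lbas:
        print("no aborted I/O command prints found")
        return
    print(f"aborted commands: {sum(per_opcode.values())} "
          f"(READ={per_opcode['READ']}, WRITE={per_opcode['WRITE']})")
    print(f"lba range: {min(lbas)}..{max(lbas)}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run against a saved copy of this console output, it reports the abort count per opcode and the LBA span touched by the deleted submission queue, which is usually enough to confirm the aborts are the expected side effect of the failover rather than data-path errors.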
00:34:49.229 [2024-07-22 19:39:01.178442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.229 [2024-07-22 19:39:01.178487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:39:01.178513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.229 [2024-07-22 19:39:01.178524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:39:01.178539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.229 [2024-07-22 19:39:01.178549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:39:01.178562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.229 [2024-07-22 19:39:01.178572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:39:01.178585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.229 [2024-07-22 19:39:01.178599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:39:01.178612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.229 [2024-07-22 19:39:01.178622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:39:01.178634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.229 [2024-07-22 19:39:01.178644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.229 [2024-07-22 19:39:01.178657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 
19:39:01.178724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.178984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.178996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.230 [2024-07-22 19:39:01.179171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179193] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.230 [2024-07-22 19:39:01.179538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.230 [2024-07-22 19:39:01.179548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 
19:39:01.179891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.179979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.179992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.231 [2024-07-22 19:39:01.180332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.231 [2024-07-22 19:39:01.180343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:49.231-00:34:49.232 [2024-07-22 19:39:01.180354 - 19:39:01.180886] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 16 WRITE commands (sqid:1 nsid:1 lba:104560-104680 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 8 READ commands (sqid:1 nsid:1 lba:103904-103960 len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:49.232 [2024-07-22 19:39:01.180917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:103968 len:8 PRP1 0x0 PRP2 0x0 - ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:49.232 [2024-07-22 19:39:01.180990 - 19:39:01.181070] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000 each completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:49.232 [2024-07-22 19:39:01.181080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set
00:34:49.232 [2024-07-22 19:39:01.181361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
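The records above and below show the NVMe host library completing every outstanding and queued I/O on this qpair with ABORTED - SQ DELETION while the submission queue is torn down. As a minimal triage sketch (assuming this console output has been saved locally as build.log; the file name and the qid:1 filter are assumptions, not part of the test scripts), the repeated abort records can be tallied with standard shell tools:

# count how many completions carry the ABORTED - SQ DELETION status on I/O queue 1
# (build.log is an assumed local copy of this console output, not produced by the job)
grep -o 'ABORTED - SQ DELETION (00/08) qid:1' build.log | wc -l
# pull out every lba:<n> field and print the lowest and highest affected LBA
grep -oE 'lba:[0-9]+' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'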
00:34:49.232-00:34:49.234 [2024-07-22 19:39:01.181374 - 19:39:01.193191] nvme_qpair.c: repeated 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:, 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 for 52 queued READ commands at lba:103976-104144 and lba:103664-103896
00:34:49.234-00:34:49.236 [2024-07-22 19:39:01.193205 - 19:39:01.201547] nvme_qpair.c: the same aborting queued i/o / Command completed manually: / ABORTED - SQ DELETION (00/08) sequence for 51 queued WRITE commands sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0 at lba:104152-104552
00:34:49.236 [2024-07-22 19:39:01.201557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.236 [2024-07-22 19:39:01.201564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.236 [2024-07-22 19:39:01.201573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:104560 len:8 PRP1 0x0 PRP2 0x0 00:34:49.236 [2024-07-22 19:39:01.201583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.236 [2024-07-22 19:39:01.201593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.236 [2024-07-22 19:39:01.201601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.236 [2024-07-22 19:39:01.201611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104568 len:8 PRP1 0x0 PRP2 0x0 00:34:49.236 [2024-07-22 19:39:01.201621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.236 [2024-07-22 19:39:01.201632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.236 [2024-07-22 19:39:01.201639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.236 [2024-07-22 19:39:01.201648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104576 len:8 PRP1 0x0 PRP2 0x0 00:34:49.236 [2024-07-22 19:39:01.201658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.236 [2024-07-22 19:39:01.201667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.236 [2024-07-22 19:39:01.201675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.236 [2024-07-22 19:39:01.201683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104584 len:8 PRP1 0x0 PRP2 0x0 00:34:49.236 [2024-07-22 19:39:01.201693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.236 [2024-07-22 19:39:01.201703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.236 [2024-07-22 19:39:01.201711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.236 [2024-07-22 19:39:01.201719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104592 len:8 PRP1 0x0 PRP2 0x0 00:34:49.236 [2024-07-22 19:39:01.201730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.236 [2024-07-22 19:39:01.201739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.236 [2024-07-22 19:39:01.201747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.236 [2024-07-22 19:39:01.201756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104600 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.201766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.201776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.201783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.201791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104608 len:8 PRP1 0x0 PRP2 
0x0 00:34:49.237 [2024-07-22 19:39:01.201802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.201812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.201819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.201828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104616 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.201838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.201848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.201855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.201864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104624 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.201874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.201885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.201893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.201902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104632 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.201912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.201922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.201929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.201938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104640 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.201948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.201957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.201965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.201973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104648 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.201983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.201993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104656 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104664 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104672 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104680 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103904 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103912 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103920 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202245] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103928 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103936 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103944 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103952 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103960 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:49.237 [2024-07-22 19:39:01.202445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:49.237 [2024-07-22 19:39:01.202456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103968 len:8 PRP1 0x0 PRP2 0x0 00:34:49.237 [2024-07-22 19:39:01.202466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.237 [2024-07-22 19:39:01.202676] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500038a200 was disconnected and freed. reset controller. 00:34:49.237 [2024-07-22 19:39:01.202692] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:49.237 [2024-07-22 19:39:01.202705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:49.237 [2024-07-22 19:39:01.202767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:34:49.237 [2024-07-22 19:39:01.206590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:49.237 [2024-07-22 19:39:01.376931] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:49.237 00:34:49.237 Latency(us) 00:34:49.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.237 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:49.237 Verification LBA range: start 0x0 length 0x4000 00:34:49.237 NVMe0n1 : 15.01 10064.10 39.31 734.66 0.00 11821.85 853.33 33641.81 00:34:49.237 =================================================================================================================== 00:34:49.237 Total : 10064.10 39.31 734.66 0.00 11821.85 853.33 33641.81 00:34:49.237 Received shutdown signal, test time was about 15.000000 seconds 00:34:49.237 00:34:49.237 Latency(us) 00:34:49.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.237 =================================================================================================================== 00:34:49.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3111475 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3111475 /var/tmp/bdevperf.sock 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3111475 ']' 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:49.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
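The pass/fail gate traced above reduces to counting 'Resetting controller successful' lines before the second bdevperf phase starts. A minimal sketch of that check and of the bdevperf relaunch, assuming the first run's console output was captured in test/nvmf/host/try.txt and that exactly three failovers were expected (both taken from the failover.sh trace here); $rootdir stands for the spdk checkout shown in the absolute paths above, and waitforlisten is the harness helper that blocks until the RPC socket answers:

    # count completed controller resets in the captured bdevperf output
    count=$(grep -c 'Resetting controller successful' "$rootdir/test/nvmf/host/try.txt")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi
    # second phase: start bdevperf in RPC-wait mode (-z) on its own socket and drive it over JSON-RPC
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock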
00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:49.237 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:50.179 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:50.179 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:50.179 19:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:50.179 [2024-07-22 19:39:09.029414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:50.179 19:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:50.439 [2024-07-22 19:39:09.201843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:50.439 19:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:50.700 NVMe0n1 00:34:50.700 19:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:50.961 00:34:50.961 19:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:51.222 00:34:51.222 19:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:51.222 19:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:51.483 19:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:51.483 19:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:54.788 19:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:54.788 19:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:54.788 19:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3112953 00:34:54.788 19:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3112953 00:34:54.788 19:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:55.732 0 00:34:55.993 19:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:55.994 [2024-07-22 19:39:08.146742] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:55.994 [2024-07-22 19:39:08.146862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111475 ] 00:34:55.994 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.994 [2024-07-22 19:39:08.259499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.994 [2024-07-22 19:39:08.436290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.994 [2024-07-22 19:39:10.381921] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:55.994 [2024-07-22 19:39:10.382001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:55.994 [2024-07-22 19:39:10.382019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:55.994 [2024-07-22 19:39:10.382035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:55.994 [2024-07-22 19:39:10.382046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:55.994 [2024-07-22 19:39:10.382058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:55.994 [2024-07-22 19:39:10.382069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:55.994 [2024-07-22 19:39:10.382080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:55.994 [2024-07-22 19:39:10.382090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:55.994 [2024-07-22 19:39:10.382100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:55.994 [2024-07-22 19:39:10.382153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:55.994 [2024-07-22 19:39:10.382182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:34:55.994 [2024-07-22 19:39:10.432688] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:55.994 Running I/O for 1 seconds... 
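The try.txt excerpt above is produced by the path juggling traced in failover.sh: extra listeners are added on the target, bdevperf attaches the same controller name to each portal, and detaching the path in use forces bdev_nvme to fail over. A minimal sketch built only from the rpc.py calls visible in the trace, assuming target address 10.0.0.2, subsystem nqn.2016-06.io.spdk:cnode1 and the /var/tmp/bdevperf.sock socket from above; rpc_py is shorthand for the scripts/rpc.py path shown there:

    rpc_py="$rootdir/scripts/rpc.py"
    # expose two additional portals on the same subsystem
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # give bdevperf all three paths under the same controller name NVMe0
    for port in 4420 4421 4422; do
        $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # removing the active path triggers 'Start failover from ... to ...' followed by a controller reset
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1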
00:34:55.994 00:34:55.994 Latency(us) 00:34:55.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.994 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:55.994 Verification LBA range: start 0x0 length 0x4000 00:34:55.994 NVMe0n1 : 1.01 10560.98 41.25 0.00 0.00 12059.60 2949.12 11141.12 00:34:55.994 =================================================================================================================== 00:34:55.994 Total : 10560.98 41.25 0.00 0.00 12059.60 2949.12 11141.12 00:34:55.994 19:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:55.994 19:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:55.994 19:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:56.255 19:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:56.255 19:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:56.255 19:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:56.516 19:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3111475 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3111475 ']' 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3111475 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3111475 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3111475' 00:34:59.843 killing process with pid 3111475 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3111475 00:34:59.843 19:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3111475 00:35:00.417 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:35:00.417 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:00.678 rmmod nvme_tcp 00:35:00.678 rmmod nvme_fabrics 00:35:00.678 rmmod nvme_keyring 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3107655 ']' 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3107655 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3107655 ']' 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3107655 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3107655 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3107655' 00:35:00.678 killing process with pid 3107655 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3107655 00:35:00.678 19:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3107655 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.621 19:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:03.534 00:35:03.534 real 0m41.096s 00:35:03.534 user 2m6.748s 00:35:03.534 sys 0m8.316s 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:03.534 ************************************ 00:35:03.534 END TEST nvmf_failover 00:35:03.534 ************************************ 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.534 ************************************ 00:35:03.534 START TEST nvmf_host_discovery 00:35:03.534 ************************************ 00:35:03.534 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:03.796 * Looking for test storage... 00:35:03.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:03.796 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:03.797 19:39:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:35:03.797 19:39:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:10.429 19:39:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:10.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:10.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:10.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:10.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:10.429 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:10.430 19:39:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:10.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:10.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:35:10.689 00:35:10.689 --- 10.0.0.2 ping statistics --- 00:35:10.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.689 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:10.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:10.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:35:10.689 00:35:10.689 --- 10.0.0.1 ping statistics --- 00:35:10.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.689 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.689 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3118283 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3118283 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3118283 ']' 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:10.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.690 19:39:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:10.690 [2024-07-22 19:39:29.639779] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:10.690 [2024-07-22 19:39:29.639901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.950 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.950 [2024-07-22 19:39:29.788655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.211 [2024-07-22 19:39:29.985475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.211 [2024-07-22 19:39:29.985516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.211 [2024-07-22 19:39:29.985529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.211 [2024-07-22 19:39:29.985539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.211 [2024-07-22 19:39:29.985549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.211 [2024-07-22 19:39:29.985575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.471 [2024-07-22 19:39:30.411467] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.471 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:35:11.471 [2024-07-22 19:39:30.423632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.733 null0 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.733 null1 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3118506 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3118506 /tmp/host.sock 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3118506 ']' 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:11.733 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:11.733 19:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.733 [2024-07-22 19:39:30.541069] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
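Up to this point the trace has brought the target side online: an nvmf_tgt instance is launched inside the cvl_0_0_ns_spdk namespace, a TCP transport is created, a discovery listener is opened on 10.0.0.2:8009, two null bdevs are created, and a second nvmf_tgt (the host side, RPC socket /tmp/host.sock) is started. A minimal sketch of those steps follows, using only commands that appear in the trace; rpc_cmd standing in for scripts/rpc.py is an assumption about the harness wrapper, and binary paths are shortened.

# Hedged sketch of the target-side bring-up seen in the trace above.
# Assumption: rpc_cmd is roughly the harness wrapper around scripts/rpc.py
# (default socket /var/tmp/spdk.sock); the real wrapper is more elaborate.
rpc_cmd() { ./scripts/rpc.py "$@"; }

# Target application runs inside the test namespace (flags copied from the trace).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # transport options verbatim from the trace
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                     # discovery service on port 8009
rpc_cmd bdev_null_create null0 1000 512                # two null bdevs to export later
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine

# Second nvmf_tgt instance acts as the NVMe-oF host and exposes its own RPC socket.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &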
00:35:11.733 [2024-07-22 19:39:30.541172] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118506 ] 00:35:11.733 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.733 [2024-07-22 19:39:30.649748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.995 [2024-07-22 19:39:30.827979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.566 
19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.566 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:12.567 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:35:12.827 19:39:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.827 [2024-07-22 19:39:31.642870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:12.827 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.828 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:35:13.088 19:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:35:13.658 [2024-07-22 19:39:32.341028] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:13.658 [2024-07-22 19:39:32.341063] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:13.658 
[2024-07-22 19:39:32.341091] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:13.658 [2024-07-22 19:39:32.428379] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:13.658 [2024-07-22 19:39:32.533661] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:13.658 [2024-07-22 19:39:32.533696] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.918 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:14.179 19:39:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.179 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:14.180 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:14.440 19:39:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.440 [2024-07-22 19:39:33.292236] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:14.440 [2024-07-22 19:39:33.293030] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:14.440 [2024-07-22 19:39:33.293077] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.440 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 
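The is_notification_count_eq checks exercised above poll the host app's notification log to confirm that each added namespace produced exactly one new bdev notification. An approximate reconstruction of the two helpers involved, inferred from the rpc_cmd/jq calls visible in the trace (the exact bodies in host/discovery.sh may differ slightly):

# Hedged reconstruction of the notification bookkeeping used by this test.
notify_id=0

get_notification_count() {
        # Count events newer than the last consumed id, then advance the id.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}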
00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:14.441 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.441 [2024-07-22 19:39:33.380924] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:14.701 19:39:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.701 [2024-07-22 19:39:33.444828] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:14.701 [2024-07-22 19:39:33.444859] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:14.701 [2024-07-22 19:39:33.444869] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:35:14.701 19:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:15.644 19:39:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.644 [2024-07-22 19:39:34.564249] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:15.644 [2024-07-22 19:39:34.564284] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:15.644 [2024-07-22 19:39:34.564567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.644 [2024-07-22 19:39:34.564591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.644 [2024-07-22 19:39:34.564605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.644 [2024-07-22 19:39:34.564616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.644 [2024-07-22 19:39:34.564627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.644 [2024-07-22 19:39:34.564637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.644 [2024-07-22 19:39:34.564648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.644 [2024-07-22 19:39:34.564663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.644 [2024-07-22 19:39:34.564674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:15.644 [2024-07-22 19:39:34.574577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.644 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:15.644 [2024-07-22 19:39:34.584621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:15.644 [2024-07-22 19:39:34.585036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.644 [2024-07-22 19:39:34.585061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:15.644 [2024-07-22 19:39:34.585074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.644 [2024-07-22 19:39:34.585092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.644 [2024-07-22 19:39:34.585108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:15.644 [2024-07-22 19:39:34.585119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:15.645 [2024-07-22 19:39:34.585134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
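Every "(( max-- )) / eval / sleep 1" sequence in this trace comes from the harness's waitforcondition helper, which retries a shell condition roughly once per second for up to ten attempts. A hedged reconstruction consistent with the common/autotest_common.sh@912-@918 lines shown above (the real body may differ in detail):

# Hedged reconstruction of waitforcondition from common/autotest_common.sh.
waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
                if eval "$cond"; then
                        return 0        # condition became true within the retry budget
                fi
                sleep 1
        done
        return 1                        # give up after ~10 seconds
}

The discovery test leans on this helper for every state change it asserts: subsystem names, bdev lists, listener ports, and notification counts.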
00:35:15.645 [2024-07-22 19:39:34.585152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:15.645 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.645 [2024-07-22 19:39:34.594702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:15.645 [2024-07-22 19:39:34.595116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.645 [2024-07-22 19:39:34.595138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:15.645 [2024-07-22 19:39:34.595149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.645 [2024-07-22 19:39:34.595166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.645 [2024-07-22 19:39:34.595181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:15.645 [2024-07-22 19:39:34.595193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:15.645 [2024-07-22 19:39:34.595209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:15.645 [2024-07-22 19:39:34.595225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:15.906 [2024-07-22 19:39:34.604781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:15.906 [2024-07-22 19:39:34.605167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.906 [2024-07-22 19:39:34.605185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:15.906 [2024-07-22 19:39:34.605196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.906 [2024-07-22 19:39:34.605219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.906 [2024-07-22 19:39:34.605233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:15.906 [2024-07-22 19:39:34.605242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:15.906 [2024-07-22 19:39:34.605252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:15.906 [2024-07-22 19:39:34.605266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.906 [2024-07-22 19:39:34.614854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:15.906 [2024-07-22 19:39:34.615416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.906 [2024-07-22 19:39:34.615461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:15.906 [2024-07-22 19:39:34.615477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.906 [2024-07-22 19:39:34.615503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.906 [2024-07-22 19:39:34.615541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:15.906 [2024-07-22 19:39:34.615555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:15.906 [2024-07-22 19:39:34.615566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:15.906 [2024-07-22 19:39:34.615587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:15.906 [2024-07-22 19:39:34.624932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:15.906 [2024-07-22 19:39:34.625470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.906 [2024-07-22 19:39:34.625515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:15.906 [2024-07-22 19:39:34.625530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.906 [2024-07-22 19:39:34.625557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.906 [2024-07-22 19:39:34.625606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:15.906 [2024-07-22 19:39:34.625622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:15.906 [2024-07-22 19:39:34.625634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:15.906 [2024-07-22 19:39:34.625655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.906 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:15.906 [2024-07-22 19:39:34.635008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:15.906 [2024-07-22 19:39:34.635517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.906 [2024-07-22 19:39:34.635563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:15.906 [2024-07-22 19:39:34.635577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.906 [2024-07-22 19:39:34.635604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.906 [2024-07-22 19:39:34.635641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:15.906 [2024-07-22 19:39:34.635655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:15.906 [2024-07-22 19:39:34.635667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:15.906 [2024-07-22 19:39:34.635688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
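
The xtrace above repeatedly drives a small polling helper (waitforcondition) while the host keeps retrying its TCP connection to 10.0.0.2:4420 and hits errno 111 (connection refused), because that listener has been removed and only port 4421 remains. Below is a minimal sketch of that helper, reconstructed only from the fragments visible in the trace (local cond, max=10, (( max-- )), eval, return 0); the sleep interval and the rpc.py path in the usage line are assumptions, not taken from the log.

#!/usr/bin/env bash
# Sketch of the waitforcondition-style polling loop seen in the xtrace:
# retry an arbitrary bash condition up to 10 times. The max=10 bound and
# the eval pattern come from the trace; the 1-second sleep between
# attempts is an assumption (the real interval is not shown).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1   # assumed pacing between retries
    done
    echo "condition never met: $cond" >&2
    return 1
}

# Example use, mirroring the checks in host/discovery.sh: poll the host RPC
# socket until the controller list drains. The rpc.py path is assumed to be
# the in-tree scripts/rpc.py.
rpc="./scripts/rpc.py -s /tmp/host.sock"
waitforcondition "[[ -z \"\$($rpc bdev_nvme_get_controllers | jq -r '.[].name')\" ]]"
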
00:35:15.906 [2024-07-22 19:39:34.645087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:15.906 [2024-07-22 19:39:34.645488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.906 [2024-07-22 19:39:34.645510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:15.906 [2024-07-22 19:39:34.645521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:15.906 [2024-07-22 19:39:34.645538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:15.906 [2024-07-22 19:39:34.645552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:15.906 [2024-07-22 19:39:34.645561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:15.906 [2024-07-22 19:39:34.645570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:15.906 [2024-07-22 19:39:34.645586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:15.907 [2024-07-22 19:39:34.650807] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:15.907 [2024-07-22 19:39:34.650839] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:15.907 
19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # (( max-- )) 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:15.907 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.168 19:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.109 [2024-07-22 19:39:36.020476] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:17.109 [2024-07-22 19:39:36.020502] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:17.109 [2024-07-22 19:39:36.020530] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:17.370 [2024-07-22 19:39:36.108838] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:35:17.370 [2024-07-22 19:39:36.175995] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:17.370 [2024-07-22 19:39:36.176036] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.370 request: 00:35:17.370 { 00:35:17.370 "name": "nvme", 00:35:17.370 "trtype": "tcp", 00:35:17.370 "traddr": "10.0.0.2", 00:35:17.370 "adrfam": "ipv4", 00:35:17.370 "trsvcid": "8009", 00:35:17.370 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:17.370 "wait_for_attach": true, 00:35:17.370 "method": "bdev_nvme_start_discovery", 00:35:17.370 "req_id": 1 00:35:17.370 } 00:35:17.370 Got JSON-RPC error response 00:35:17.370 response: 00:35:17.370 { 00:35:17.370 "code": -17, 00:35:17.370 "message": "File exists" 00:35:17.370 } 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.370 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.370 request: 00:35:17.370 { 00:35:17.370 "name": "nvme_second", 00:35:17.370 "trtype": "tcp", 00:35:17.370 "traddr": "10.0.0.2", 00:35:17.370 "adrfam": "ipv4", 00:35:17.370 "trsvcid": "8009", 00:35:17.370 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:17.370 "wait_for_attach": true, 00:35:17.370 "method": "bdev_nvme_start_discovery", 00:35:17.370 "req_id": 1 00:35:17.370 } 00:35:17.370 Got JSON-RPC error response 00:35:17.370 response: 00:35:17.370 { 00:35:17.370 "code": -17, 00:35:17.632 "message": "File exists" 00:35:17.632 } 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.632 19:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.575 [2024-07-22 19:39:37.443760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.575 [2024-07-22 19:39:37.443799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389d00 with addr=10.0.0.2, port=8010 00:35:18.575 [2024-07-22 
19:39:37.443841] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:18.575 [2024-07-22 19:39:37.443853] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:18.575 [2024-07-22 19:39:37.443864] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:19.521 [2024-07-22 19:39:38.446112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.521 [2024-07-22 19:39:38.446143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389f80 with addr=10.0.0.2, port=8010 00:35:19.521 [2024-07-22 19:39:38.446185] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:19.521 [2024-07-22 19:39:38.446196] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:19.521 [2024-07-22 19:39:38.446212] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:20.905 [2024-07-22 19:39:39.447974] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:20.905 request: 00:35:20.905 { 00:35:20.905 "name": "nvme_second", 00:35:20.905 "trtype": "tcp", 00:35:20.905 "traddr": "10.0.0.2", 00:35:20.905 "adrfam": "ipv4", 00:35:20.905 "trsvcid": "8010", 00:35:20.905 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:20.905 "wait_for_attach": false, 00:35:20.905 "attach_timeout_ms": 3000, 00:35:20.905 "method": "bdev_nvme_start_discovery", 00:35:20.905 "req_id": 1 00:35:20.905 } 00:35:20.905 Got JSON-RPC error response 00:35:20.905 response: 00:35:20.905 { 00:35:20.905 "code": -110, 00:35:20.905 "message": "Connection timed out" 00:35:20.905 } 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3118506 00:35:20.905 19:39:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:20.905 rmmod nvme_tcp 00:35:20.905 rmmod nvme_fabrics 00:35:20.905 rmmod nvme_keyring 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3118283 ']' 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3118283 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3118283 ']' 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3118283 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3118283 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3118283' 00:35:20.905 killing process with pid 3118283 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3118283 00:35:20.905 19:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3118283 00:35:21.477 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:21.477 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:21.477 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:21.477 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:21.477 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:21.477 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.477 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.478 19:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.393 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:23.393 00:35:23.393 real 0m19.868s 00:35:23.393 user 0m23.834s 00:35:23.393 sys 0m6.735s 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.656 ************************************ 00:35:23.656 END TEST nvmf_host_discovery 00:35:23.656 ************************************ 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.656 ************************************ 00:35:23.656 START TEST nvmf_host_multipath_status 00:35:23.656 ************************************ 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:23.656 * Looking for test storage... 00:35:23.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.656 19:39:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:35:23.656 19:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.801 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:31.802 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:31.802 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.802 19:39:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:31.802 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:31.802 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:31.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:31.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:35:31.802 00:35:31.802 --- 10.0.0.2 ping statistics --- 00:35:31.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.802 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:31.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:31.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:35:31.802 00:35:31.802 --- 10.0.0.1 ping statistics --- 00:35:31.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.802 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3124484 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3124484 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3124484 ']' 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:31.802 19:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.802 [2024-07-22 19:39:49.910658] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
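The trace above is nvmf/common.sh's nvmf_tcp_init splitting the two ice ports: cvl_0_0 is moved into a dedicated network namespace and becomes the target-facing interface (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1). A minimal standalone sketch of that wiring, using the interface and namespace names from the trace (the NS variable is only a local shorthand, and the long workspace path to nvmf_tgt is shortened):

  # Network split performed by nvmf_tcp_init above (names taken from the trace).
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # reachability check in both directions
  ip netns exec "$NS" ping -c 1 10.0.0.1
  # The target is then started inside the namespace, as the trace shows:
  # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3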
00:35:31.803 [2024-07-22 19:39:49.910785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.803 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.803 [2024-07-22 19:39:50.048333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:31.803 [2024-07-22 19:39:50.233157] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.803 [2024-07-22 19:39:50.233210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.803 [2024-07-22 19:39:50.233223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.803 [2024-07-22 19:39:50.233233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.803 [2024-07-22 19:39:50.233243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:31.803 [2024-07-22 19:39:50.233323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.803 [2024-07-22 19:39:50.233349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3124484 00:35:31.803 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:32.063 [2024-07-22 19:39:50.826854] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.063 19:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:32.335 Malloc0 00:35:32.335 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:32.335 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:32.646 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.646 [2024-07-22 19:39:51.498925] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.646 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:32.907 [2024-07-22 19:39:51.655297] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3124848 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3124848 /var/tmp/bdevperf.sock 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3124848 ']' 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:32.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
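The RPCs above build the target side of the multipath test: a malloc bdev (64 MB, 512-byte blocks) exported as a namespace of nqn.2016-06.io.spdk:cnode1 with two TCP listeners on 10.0.0.2 (ports 4420 and 4421), while bdevperf is started separately in -z (wait-for-RPC) mode on /var/tmp/bdevperf.sock and, as the trace just below shows, attaches the same subsystem through both listeners. The same sequence, condensed, with the full workspace path to scripts/rpc.py shortened to rpc.py:

  # Target-side RPCs from the trace (driven over the default /var/tmp/spdk.sock).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Initiator side, against bdevperf's RPC socket: one controller attached twice,
  # the second attach with -x multipath so both listeners become paths of Nvme0n1.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10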
00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:32.907 19:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:33.848 19:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:33.848 19:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:33.848 19:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:33.848 19:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:35:34.109 Nvme0n1 00:35:34.109 19:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:34.679 Nvme0n1 00:35:34.679 19:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:34.679 19:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:36.592 19:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:36.592 19:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:36.854 19:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:36.854 19:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:37.795 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:37.795 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:37.795 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.795 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:38.056 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:38.056 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:38.056 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.056 19:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.317 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:38.578 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:38.578 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:38.578 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.578 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:38.839 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:38.839 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:38.839 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.839 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:38.839 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:38.839 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:38.839 19:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:39.100 19:39:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:39.361 19:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.302 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:40.563 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.563 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:40.563 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.563 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:40.824 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.824 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:40.824 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.824 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:40.824 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.824 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:40.824 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:40.824 19:39:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.085 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:41.085 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:41.085 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.085 19:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:41.347 19:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:41.347 19:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:41.347 19:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:41.347 19:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:41.608 19:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:42.550 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:42.550 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:42.550 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.550 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:42.811 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.811 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:42.811 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.811 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.072 19:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:43.333 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.333 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:43.333 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.333 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:43.594 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.594 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:43.594 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.594 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:43.594 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.594 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:43.594 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:43.855 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:44.116 19:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:45.058 19:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:45.058 19:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:45.058 19:40:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.059 19:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:45.319 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:45.319 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:45.320 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.320 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:45.320 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:45.320 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:45.320 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.320 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.581 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:45.842 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:45.842 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:45.842 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.842 19:40:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:46.103 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:46.103 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:46.103 19:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:46.103 19:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:46.364 19:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:47.305 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:47.305 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:47.305 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.305 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:47.565 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.565 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:47.565 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.565 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.826 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:48.088 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.088 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:48.088 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.088 19:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:48.088 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:48.088 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:48.088 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.088 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:48.349 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:48.349 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:48.349 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:48.609 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:48.609 19:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:49.994 19:40:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.994 19:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:50.255 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.255 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:50.255 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.255 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.516 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:50.816 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.816 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:50.816 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:35:50.816 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:51.104 19:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:51.104 19:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.487 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:52.748 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.749 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:52.749 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.749 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:53.009 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.009 19:40:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:53.009 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.009 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:53.009 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.009 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:53.009 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.009 19:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:53.270 19:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.270 19:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:53.270 19:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:53.531 19:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:53.531 19:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:54.470 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:54.470 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:54.731 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.731 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:54.731 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:54.731 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:54.731 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:54.731 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.991 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.991 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:54.991 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.991 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:55.250 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.250 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:55.250 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.250 19:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:55.250 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.250 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:55.250 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.250 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:55.509 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.509 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:55.509 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.509 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:55.509 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.509 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:55.509 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:55.770 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:56.032 19:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
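Each round in the trace follows the same pattern: set_ANA_state flips the ANA state of the two listeners via nvmf_subsystem_listener_set_ana_state, the script sleeps one second, and check_status then asserts six values per round (current, connected and accessible for each of ports 4420 and 4421) as reported by the initiator. Partway through, bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active changes the expectation so that both optimized paths may be current at once. The per-value check, port_status, is reconstructed below from the traced commands; the real helper lives in host/multipath_status.sh, and the full rpc.py path is again shortened:

  # port_status PORT FIELD EXPECTED - query bdevperf's view of its I/O paths and
  # compare one field of the path whose listener port matches (sketch only).
  port_status() {
      local port=$1 field=$2 expected=$3
      local got
      got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$got" == "$expected" ]]
  }

  # One round from the trace: 4420 non-optimized, 4421 inaccessible, then verify
  # that 4421 is no longer accessible while 4420 stays usable.
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  sleep 1
  port_status 4420 accessible true && port_status 4421 accessible false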
00:35:56.973 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:35:56.973 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:56.973 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:56.973 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:57.233 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:57.233 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:57.233 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:57.233 19:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:57.233 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:57.233 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:57.233 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:57.233 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:57.492 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:57.492 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:57.492 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:57.493 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:57.752 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:58.012 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:58.012 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:35:58.012 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:58.273 19:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:35:58.273 19:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:35:59.215 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:35:59.215 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:59.215 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:59.215 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:59.477 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:59.477 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:59.477 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:59.477 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:59.738 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:00.000 19:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3124848
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3124848 ']'
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3124848
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3124848
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3124848'
killing process with pid 3124848
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3124848
00:36:00.261 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3124848
00:36:00.522 Connection closed with partial response:
00:36:00.522
00:36:00.522
00:36:00.807
19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3124848 00:36:00.807 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:00.807 [2024-07-22 19:39:51.771129] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:00.807 [2024-07-22 19:39:51.771250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124848 ] 00:36:00.807 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.807 [2024-07-22 19:39:51.866879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.807 [2024-07-22 19:39:52.002163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:00.807 Running I/O for 90 seconds... 00:36:00.807 [2024-07-22 19:40:05.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.807 [2024-07-22 19:40:05.012555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.807 [2024-07-22 19:40:05.012582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.807 [2024-07-22 19:40:05.012590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.807 [2024-07-22 19:40:05.012605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.012613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.012626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.012634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.012647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.012654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.012667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.012675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.012688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.012695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
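The port_status and set_ANA_state steps traced above reduce to two SPDK RPCs: bdev_nvme_get_io_paths against the bdevperf application socket, filtered with jq by trsvcid, and nvmf_subsystem_listener_set_ana_state against the target. The sketch below reconstructs that flow from the commands visible in this log only; it is not the actual test/nvmf/host/multipath_status.sh, and the helper names port_status and set_ana are illustrative placeholders.

#!/usr/bin/env bash
# Illustrative reconstruction of the checks traced above (not the real
# multipath_status.sh); paths, NQN and addresses are copied from this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Query bdevperf for its I/O paths and pick one attribute of one portal.
port_status() {          # usage: port_status 4421 accessible false
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# Flip the ANA state of the two target listeners, as done at
# multipath_status.sh@59/@60 in the trace above.
set_ana() {              # usage: set_ana non_optimized inaccessible
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Mirror of the "check_status true false true true true false" step:
set_ana non_optimized inaccessible
sleep 1
port_status 4420 current true && port_status 4421 current false

Checking .current, .connected and .accessible for both 4420 and 4421 in turn is exactly what each check_status invocation in the trace expands to.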
00:36:00.808 [2024-07-22 19:40:05.012709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.012716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.013934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.013941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.808 [2024-07-22 19:40:05.014559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.808 [2024-07-22 19:40:05.014642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.808 [2024-07-22 19:40:05.014655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.014663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.014676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.014683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.014696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.014704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.014717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.014724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.014737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.014745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.014758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.014766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.014779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.014788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.809 [2024-07-22 19:40:05.015229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
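Each pair of NOTICE lines in this dump is one outstanding I/O (nvme_io_qpair_print_command) followed by its completion (spdk_nvme_print_completion). The reported status, ASYMMETRIC ACCESS INACCESSIBLE (03/02), is status code type 3h (path-related) with status code 02h, which is what the initiator is expected to see for I/O routed to a listener whose ANA state has just been set to inaccessible. A quick, hypothetical way to summarize a capture like try.txt (plain grep, not part of the test itself):

# Hypothetical summary of a bdevperf capture such as try.txt:
grep -c 'nvme_io_qpair_print_command' try.txt        # I/O command notices printed
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt     # completions with ANA-inaccessible status
grep -o 'lba:[0-9]*' try.txt | sort -u | wc -l       # distinct LBAs that appear in the dump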
00:36:00.809 [2024-07-22 19:40:05.015410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.809 [2024-07-22 19:40:05.015877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.809 [2024-07-22 19:40:05.015885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.015898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.015905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.015918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.015925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.015938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.015945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.015958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.015966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.015981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.015989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.810 [2024-07-22 19:40:05.016255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.810 [2024-07-22 19:40:05.016379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.810 [2024-07-22 19:40:05.016532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.016922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.016930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.017019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.017028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.017042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.017050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.017063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.017071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.017084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.810 [2024-07-22 19:40:05.017097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.810 [2024-07-22 19:40:05.017110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:36:00.811 [2024-07-22 19:40:05.017340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.811 [2024-07-22 19:40:05.017707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.811 [2024-07-22 19:40:05.017714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
[... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs (WRITE/READ sqid:1, lba 81424-82440, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), 2024-07-22 19:40:05.017-19:40:05.035 ...]
00:36:00.817 [2024-07-22 19:40:05.035662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:00.817 [2024-07-22 19:40:05.035672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.817 [2024-07-22 19:40:05.035753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.035915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.035928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.817 [2024-07-22 19:40:05.035936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.037809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:36:00.817 [2024-07-22 19:40:05.037829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.037837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.038007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.038017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.038032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.038045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.817 [2024-07-22 19:40:05.038058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.817 [2024-07-22 19:40:05.038065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.818 [2024-07-22 19:40:05.038599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.818 [2024-07-22 19:40:05.038856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.818 [2024-07-22 19:40:05.038864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.044597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.044622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.044642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.044650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.044663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.044671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.044684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.044691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.044705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.044713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:36:00.819 [2024-07-22 19:40:05.045452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.819 [2024-07-22 19:40:05.045690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.819 [2024-07-22 19:40:05.045893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.819 [2024-07-22 19:40:05.045906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.045913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.045926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.045933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.045946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.045953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.045966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.045973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.045986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.045993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.820 [2024-07-22 19:40:05.046053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.820 [2024-07-22 19:40:05.046342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.820 [2024-07-22 19:40:05.046524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:36:00.820 [2024-07-22 19:40:05.046660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.820 [2024-07-22 19:40:05.046680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.820 [2024-07-22 19:40:05.046687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.046846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.046853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.821 [2024-07-22 19:40:05.047962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.047982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.047995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.048003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.048015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.048022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.048035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.048043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.821 [2024-07-22 19:40:05.048056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.821 [2024-07-22 19:40:05.048063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:36:00.822 [2024-07-22 19:40:05.048978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.048986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.048999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.822 [2024-07-22 19:40:05.049251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.822 [2024-07-22 19:40:05.049258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.823 [2024-07-22 19:40:05.049428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.823 [2024-07-22 19:40:05.049854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.049983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.049996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.823 [2024-07-22 19:40:05.050477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.823 [2024-07-22 19:40:05.050484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.824 [2024-07-22 19:40:05.050504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:36:00.824 [2024-07-22 19:40:05.050618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.824 [2024-07-22 19:40:05.050685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.050988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.050995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.824 [2024-07-22 19:40:05.051738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.824 [2024-07-22 19:40:05.051791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.824 [2024-07-22 19:40:05.051799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.051812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.051819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.051832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.051839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.051852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.051859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.052864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:36:00.825 [2024-07-22 19:40:05.052884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.052892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.053134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.053145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.053159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.053167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.053180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.053188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.053207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.825 [2024-07-22 19:40:05.053214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.825 [2024-07-22 19:40:05.053228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.053631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.053639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.826 [2024-07-22 19:40:05.054168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.826 [2024-07-22 19:40:05.054188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.826 [2024-07-22 19:40:05.054215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.826 [2024-07-22 19:40:05.054243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.826 [2024-07-22 19:40:05.054264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:00.826 [2024-07-22 19:40:05.054284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.826 [2024-07-22 19:40:05.054306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.826 [2024-07-22 19:40:05.054327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.054340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.054347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.055219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.055236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.826 [2024-07-22 19:40:05.055252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.826 [2024-07-22 19:40:05.055260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:36:00.827 [2024-07-22 19:40:05.055929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.055989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.055996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.827 [2024-07-22 19:40:05.056016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.827 [2024-07-22 19:40:05.056173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.827 [2024-07-22 19:40:05.056180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.828 [2024-07-22 19:40:05.056206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056545] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 
19:40:05.056750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.056984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.056997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82352 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.828 [2024-07-22 19:40:05.057300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.828 [2024-07-22 19:40:05.057307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:80 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.057982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.057995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:36:00.829 [2024-07-22 19:40:05.058337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.829 [2024-07-22 19:40:05.058892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.829 [2024-07-22 19:40:05.058905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.058912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.058925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.058932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.058945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.058953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.058966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.058973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.058986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.058994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.830 [2024-07-22 19:40:05.059566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.830 [2024-07-22 19:40:05.059589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.830 [2024-07-22 19:40:05.059610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.830 [2024-07-22 19:40:05.059637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.830 [2024-07-22 19:40:05.059657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:00.830 [2024-07-22 19:40:05.059678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.830 [2024-07-22 19:40:05.059698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.830 [2024-07-22 19:40:05.059718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.059990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.059997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.060010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.060019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.060032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.060040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.060053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.060060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.060074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.060081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.060187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.060197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.060216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.060224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.830 [2024-07-22 19:40:05.060238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.830 [2024-07-22 19:40:05.060245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:36:00.831 [2024-07-22 19:40:05.060813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.831 [2024-07-22 19:40:05.060915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.060989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.060996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.061106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.061128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.061149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.061169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.831 [2024-07-22 19:40:05.061189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.061215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.061239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.831 [2024-07-22 19:40:05.061252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.831 [2024-07-22 19:40:05.061260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.832 [2024-07-22 19:40:05.061950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.061984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.061991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.832 [2024-07-22 19:40:05.062882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.832 [2024-07-22 19:40:05.062890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.062903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.062910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.062924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.062931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.062944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.062951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.062965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.062972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:36:00.833 [2024-07-22 19:40:05.063095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.063983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.063996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.833 [2024-07-22 19:40:05.064329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.833 [2024-07-22 19:40:05.064820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.833 [2024-07-22 19:40:05.064833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.064840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.064853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.064860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.064874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.064881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.064894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.064901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.064914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.064927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.064940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.064947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.064960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.064967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.064980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.064988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.065008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.065023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.065030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.065043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.065050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.065063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.065071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.065083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.065091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.065104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.065111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:36:00.834 [2024-07-22 19:40:05.066309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.834 [2024-07-22 19:40:05.066738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.834 [2024-07-22 19:40:05.066758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.834 [2024-07-22 19:40:05.066771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.835 [2024-07-22 19:40:05.066939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.066980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.066992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.835 [2024-07-22 19:40:05.067060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.835 [2024-07-22 19:40:05.067779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.835 [2024-07-22 19:40:05.067786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.067799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.067806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.067819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.067826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.067839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.067846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.067859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.067866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.836 
[2024-07-22 19:40:05.068142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068853] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.068991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.068999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.069243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.069254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.069268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.069276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.069290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 
19:40:05.069297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.069310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.069317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.069331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.069338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.836 [2024-07-22 19:40:05.069351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.836 [2024-07-22 19:40:05.069358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81856 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.069768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.069776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.837 [2024-07-22 19:40:05.070307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:36:00.837 [2024-07-22 19:40:05.070893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.837 [2024-07-22 19:40:05.070977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.837 [2024-07-22 19:40:05.070984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.070997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.838 [2024-07-22 19:40:05.071414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.838 [2024-07-22 19:40:05.071785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.071975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.071988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:00.838 [2024-07-22 19:40:05.071995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.838 [2024-07-22 19:40:05.072308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.838 [2024-07-22 19:40:05.072322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:00.838 [2024-07-22 19:40:05.072329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:36:00.838 [2024-07-22 19:40:05.072557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:00.838 [2024-07-22 19:40:05.072567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0
[... several hundred similar notice pairs omitted: nvme_io_qpair_print_command prints each outstanding READ/WRITE on sqid:1 (len:8, LBAs roughly 81424-82440 in the 19:40:05.072-19:40:05.080 burst and 55664-56552 in the 19:40:17.114-19:40:17.116 burst), and spdk_nvme_print_completion reports every one of them completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:36:00.843 [2024-07-22 19:40:17.115998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.843 [2024-07-22 19:40:17.116005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.843 [2024-07-22 19:40:17.116019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.843 [2024-07-22 19:40:17.116027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.843 [2024-07-22 19:40:17.116043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.843 [2024-07-22 19:40:17.116050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.843 [2024-07-22 19:40:17.116064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.843 [2024-07-22 19:40:17.116071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.843 [2024-07-22 19:40:17.116085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.843 [2024-07-22 19:40:17.116092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.843 [2024-07-22 19:40:17.116106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.843 [2024-07-22 19:40:17.116113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.843 Received shutdown signal, test time was about 25.649836 seconds 00:36:00.843 00:36:00.843 Latency(us) 00:36:00.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.843 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:00.843 Verification LBA range: start 0x0 length 0x4000 00:36:00.843 Nvme0n1 : 25.65 9935.24 38.81 0.00 0.00 12862.91 464.21 3075822.93 00:36:00.843 =================================================================================================================== 00:36:00.843 Total : 9935.24 38.81 0.00 0.00 12862.91 464.21 3075822.93 00:36:00.843 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@120 -- # set +e 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:01.105 rmmod nvme_tcp 00:36:01.105 rmmod nvme_fabrics 00:36:01.105 rmmod nvme_keyring 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3124484 ']' 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3124484 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3124484 ']' 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3124484 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3124484 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3124484' 00:36:01.105 killing process with pid 3124484 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3124484 00:36:01.105 19:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3124484 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.048 19:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.596 19:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:04.596 00:36:04.596 real 0m40.577s 00:36:04.596 user 1m43.515s 00:36:04.596 sys 0m10.917s 00:36:04.596 19:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 
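The nvmftestfini / nvmfcleanup sequence traced above reduces to a handful of commands. A condensed sketch with this run's values baked in (target pid 3124484, initiator interface cvl_0_1; rpc.py path shortened), not the harness's actual helper functions:
# drop the subsystem under test, then stop the target app
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 3124484 && wait 3124484        # nvmf_tgt started for the multipath test
# unload the kernel initiator stack; nvme_fabrics and nvme_keyring come out with it
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1            # remove the 10.0.0.1/24 test address from the initiator port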
00:36:04.596 19:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:04.596 ************************************ 00:36:04.596 END TEST nvmf_host_multipath_status 00:36:04.596 ************************************ 00:36:04.596 19:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:04.596 19:40:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:04.596 19:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:04.596 19:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:04.596 19:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.596 ************************************ 00:36:04.596 START TEST nvmf_discovery_remove_ifc 00:36:04.596 ************************************ 00:36:04.596 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:04.596 * Looking for test storage... 00:36:04.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@47 -- # : 0 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:36:04.597 19:40:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # 
pci_devs=() 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:11.220 19:40:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:11.220 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:11.220 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:11.220 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.220 19:40:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:11.220 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:11.220 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:11.482 19:40:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:11.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:11.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:36:11.482 00:36:11.482 --- 10.0.0.2 ping statistics --- 00:36:11.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.482 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:11.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:11.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:36:11.482 00:36:11.482 --- 10.0.0.1 ping statistics --- 00:36:11.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.482 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:11.482 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3134717 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3134717 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3134717 
']' 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:11.744 19:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:11.744 [2024-07-22 19:40:30.562397] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:11.744 [2024-07-22 19:40:30.562528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.744 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.005 [2024-07-22 19:40:30.711679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.005 [2024-07-22 19:40:30.935996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:12.005 [2024-07-22 19:40:30.936063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:12.005 [2024-07-22 19:40:30.936078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:12.005 [2024-07-22 19:40:30.936087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:12.005 [2024-07-22 19:40:30.936099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
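The nvmf_tcp_init plumbing traced above moves one port of the NIC into a private namespace to act as the target while the other port stays in the root namespace as the initiator. Condensed into plain ip/iptables commands, with this machine's interface names (cvl_0_0 / cvl_0_1) baked in:
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
With both pings answering, the target app is launched inside the namespace, which is the nvmf_tgt invocation whose EAL output appears above.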
00:36:12.005 [2024-07-22 19:40:30.936134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:12.577 [2024-07-22 19:40:31.356439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.577 [2024-07-22 19:40:31.364688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:12.577 null0 00:36:12.577 [2024-07-22 19:40:31.396630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3134796 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3134796 /tmp/host.sock 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3134796 ']' 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:12.577 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:12.577 19:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:12.577 [2024-07-22 19:40:31.519199] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
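Because the host-side app above is launched with --wait-for-rpc, it idles until its RPC socket /tmp/host.sock is reachable and framework_start_init is issued; that wait is what the harness's waitforlisten helper covers. A minimal stand-in for the same handshake, assuming rpc_get_methods as a cheap liveness probe and shortened paths (the real helper is more elaborate):
build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!
# poll until the UNIX-domain RPC socket answers
until scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1    # as issued in the trace that follows
scripts/rpc.py -s /tmp/host.sock framework_start_init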
00:36:12.577 [2024-07-22 19:40:31.519351] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134796 ] 00:36:12.838 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.838 [2024-07-22 19:40:31.647026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.099 [2024-07-22 19:40:31.827236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.360 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:13.621 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.621 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:36:13.621 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.621 19:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:15.006 [2024-07-22 19:40:33.546355] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:15.006 [2024-07-22 19:40:33.546396] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:15.006 [2024-07-22 19:40:33.546424] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:15.006 [2024-07-22 19:40:33.676863] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:15.006 [2024-07-22 19:40:33.861296] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:15.006 [2024-07-22 19:40:33.861359] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:15.006 [2024-07-22 19:40:33.861409] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:15.006 [2024-07-22 
19:40:33.861433] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:15.006 [2024-07-22 19:40:33.861466] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.006 [2024-07-22 19:40:33.906289] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x615000388900 was disconnected and freed. delete nvme_qpair. 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:36:15.006 19:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:15.268 19:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:16.209 19:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:17.594 19:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:18.537 19:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:19.478 19:40:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:20.420 [2024-07-22 19:40:39.301744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:36:20.420 [2024-07-22 19:40:39.301809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:20.420 [2024-07-22 19:40:39.301828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:20.420 [2024-07-22 19:40:39.301843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:20.420 [2024-07-22 19:40:39.301854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:20.420 [2024-07-22 19:40:39.301865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:20.421 [2024-07-22 19:40:39.301875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:20.421 [2024-07-22 19:40:39.301886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:20.421 [2024-07-22 19:40:39.301897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:20.421 [2024-07-22 19:40:39.301908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:20.421 [2024-07-22 19:40:39.301923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:20.421 [2024-07-22 19:40:39.301934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:36:20.421 [2024-07-22 19:40:39.311761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:36:20.421 [2024-07-22 19:40:39.321805] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:20.421 19:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:20.421 19:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:20.421 19:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:20.421 19:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.421 19:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:20.421 19:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:20.421 19:40:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:21.805 [2024-07-22 19:40:40.337232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:36:21.805 [2024-07-22 19:40:40.337296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:36:21.805 [2024-07-22 19:40:40.337315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:36:21.805 [2024-07-22 19:40:40.337357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:36:21.805 [2024-07-22 19:40:40.337873] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:21.805 [2024-07-22 19:40:40.337900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:21.805 [2024-07-22 19:40:40.337911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:21.805 [2024-07-22 19:40:40.337924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:21.805 [2024-07-22 19:40:40.337950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:21.805 [2024-07-22 19:40:40.337963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:21.805 19:40:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.805 19:40:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:21.805 19:40:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:22.748 [2024-07-22 19:40:41.340364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:22.748 [2024-07-22 19:40:41.340392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:22.748 [2024-07-22 19:40:41.340403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:22.748 [2024-07-22 19:40:41.340413] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:36:22.748 [2024-07-22 19:40:41.340432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
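The xtrace lines above are successive one-second iterations of the test's bdev polling helper: each pass calls bdev_get_bdevs over the host application's RPC socket and flattens the result with jq, sort and xargs, until the list matches the expected value (empty once nvme0n1 is torn down, nvme1n1 once the port comes back). A minimal stand-alone sketch of that pattern, using scripts/rpc.py from an SPDK checkout instead of the suite's rpc_cmd wrapper, with the /tmp/host.sock socket seen in this run:

```bash
#!/usr/bin/env bash
# Poll an SPDK application's bdev list until it matches an expected value.
# Assumes an SPDK checkout (scripts/rpc.py) and an app serving RPCs on /tmp/host.sock.
HOST_SOCK=/tmp/host.sock

get_bdev_list() {
    # bdev_get_bdevs returns a JSON array of bdevs; reduce it to a sorted,
    # space-separated list of names, like the jq | sort | xargs trace above
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev_list() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev_list ""         # e.g. wait for every bdev to disappear after the port is removed
wait_for_bdev_list "nvme1n1"  # e.g. wait for the re-discovered namespace to show up
```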
00:36:22.748 [2024-07-22 19:40:41.340464] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:36:22.748 [2024-07-22 19:40:41.340503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:22.748 [2024-07-22 19:40:41.340519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:22.748 [2024-07-22 19:40:41.340539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:22.748 [2024-07-22 19:40:41.340550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:22.748 [2024-07-22 19:40:41.340562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:22.748 [2024-07-22 19:40:41.340572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:22.748 [2024-07-22 19:40:41.340583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:22.748 [2024-07-22 19:40:41.340594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:22.748 [2024-07-22 19:40:41.340606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:22.748 [2024-07-22 19:40:41.340616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:22.748 [2024-07-22 19:40:41.340626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
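The errno 110 (connection timed out) reads, the ABORTED - SQ DELETION completions and the "Remove discovery entry" line above all come from the host-side bdev_nvme layer losing first the data-path controller and then the discovery controller after the target interface disappears; the nvme0n1 bdev is only deleted once the reconnect attempts give up. When triaging this phase it can help to dump the host application's controller view over the same RPC socket; bdev_nvme_get_controllers is a standard SPDK RPC, but it is not part of this trace, so treat the snippet as a debugging aid rather than a step of the test:

```bash
# Inspect the host app's NVMe-oF controllers while a path is failing over.
# Assumes an SPDK checkout and the /tmp/host.sock RPC socket used by this test.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq '.'
```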
00:36:22.748 [2024-07-22 19:40:41.340970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:36:22.748 [2024-07-22 19:40:41.341987] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:36:22.748 [2024-07-22 19:40:41.342010] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:22.748 19:40:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:23.692 19:40:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:23.692 19:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:24.635 [2024-07-22 19:40:43.403312] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:24.635 [2024-07-22 19:40:43.403337] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:24.635 [2024-07-22 19:40:43.403360] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:24.635 [2024-07-22 19:40:43.531828] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:24.895 19:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:24.895 [2024-07-22 19:40:43.756693] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:24.895 [2024-07-22 19:40:43.756748] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:24.895 [2024-07-22 19:40:43.756793] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:24.895 [2024-07-22 19:40:43.756820] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:24.895 [2024-07-22 19:40:43.756836] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:24.895 [2024-07-22 19:40:43.760656] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x615000389300 was disconnected and 
freed. delete nvme_qpair. 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3134796 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3134796 ']' 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3134796 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:25.836 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3134796 00:36:26.096 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:26.096 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:26.096 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3134796' 00:36:26.096 killing process with pid 3134796 00:36:26.096 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3134796 00:36:26.096 19:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3134796 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:26.667 rmmod nvme_tcp 00:36:26.667 rmmod nvme_fabrics 00:36:26.667 rmmod 
nvme_keyring 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3134717 ']' 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3134717 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3134717 ']' 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3134717 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:26.667 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3134717 00:36:26.928 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:26.928 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:26.928 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3134717' 00:36:26.928 killing process with pid 3134717 00:36:26.928 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3134717 00:36:26.928 19:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3134717 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:27.499 19:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.409 19:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:29.409 00:36:29.409 real 0m25.253s 00:36:29.409 user 0m31.167s 00:36:29.409 sys 0m6.922s 00:36:29.409 19:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:29.409 19:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:29.409 ************************************ 00:36:29.409 END TEST nvmf_discovery_remove_ifc 00:36:29.409 ************************************ 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:29.670 
19:40:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.670 ************************************ 00:36:29.670 START TEST nvmf_identify_kernel_target 00:36:29.670 ************************************ 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:29.670 * Looking for test storage... 00:36:29.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.670 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:29.671 19:40:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:36:29.671 19:40:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:36:36.329 19:40:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:36.329 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:36.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
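The prologue above assembles the list of supported NICs from Intel E810/X722 and Mellanox PCI device IDs; just below, each matching PCI function is resolved to its kernel net interface by globbing the device's net/ directory in sysfs, which is where the "Found net devices under 0000:4b:00.x" lines and the cvl_0_0 / cvl_0_1 names come from. A minimal sketch of that lookup; the PCI address is simply the one reported in this run:

```bash
#!/usr/bin/env bash
# Resolve a PCI network function to the net interface(s) the kernel bound to it.
pci=0000:4b:00.0   # example address taken from this log

shopt -s nullglob
pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)
if ((${#pci_net_devs[@]} == 0)); then
    echo "no net devices under $pci" >&2
    exit 1
fi
for dev in "${pci_net_devs[@]}"; do
    echo "Found net device under $pci: ${dev##*/} ($(cat "$dev/operstate"))"
done
```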
00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:36.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:36.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:36.330 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:36.330 
19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:36.330 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:36.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
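The ping whose output starts above, and the reverse ping from inside the namespace just below, verify the topology that nvmf_tcp_init has just built: the target-side port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) while cvl_0_1 stays in the default namespace, so target-side and initiator-side traffic take a real TCP path between two physically connected ports on the same host. A condensed sketch of that setup, reusing the names and addresses from this run; substitute your own interfaces:

```bash
#!/usr/bin/env bash
# Single-host NVMe/TCP test topology: the target NIC lives in a dedicated
# network namespace, the initiator NIC stays in the default namespace.
# Interface names, namespace and addresses follow this run; adjust as needed.
set -e
TGT_IF=cvl_0_0           # port handed to the target side
INI_IF=cvl_0_1           # port kept on the initiator side
NS=cvl_0_0_ns_spdk       # target-side network namespace
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush dev "$TGT_IF"
ip -4 addr flush dev "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic (port 4420) reach the interface in the main namespace
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 "$TGT_IP"                      # main namespace -> namespaced port
ip netns exec "$NS" ping -c 1 "$INI_IP"  # namespaced port -> main namespace
```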
00:36:36.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:36:36.592 00:36:36.592 --- 10.0.0.2 ping statistics --- 00:36:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.592 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:36.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:36.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:36:36.592 00:36:36.592 --- 10.0.0.1 ping statistics --- 00:36:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.592 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:36.592 19:40:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:39.893 Waiting for block devices as requested 00:36:40.154 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:40.154 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:40.154 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:40.415 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:40.415 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:40.415 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:40.415 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:40.675 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:40.675 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:40.936 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:40.936 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:40.936 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:41.196 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:41.196 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:41.196 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:41.196 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:41.456 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
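configure_kernel_target above exports a local NVMe drive through the Linux kernel nvmet target rather than through SPDK: it creates a subsystem, a namespace backed by /dev/nvme0n1 and a TCP port under /sys/kernel/config/nvmet, then links the subsystem to the port. The bare echo lines in the trace just below are the values being written into that tree (xtrace does not show redirection targets), so the attribute file names in this sketch come from the standard nvmet configfs layout and should be read as an informed assumption; the Model Number reported by the identify output further down (SPDK-nqn.2016-06.io.spdk:testnqn) is at least consistent with writing that string to attr_model.

```bash
#!/usr/bin/env bash
# Export a local block device over NVMe/TCP with the kernel nvmet target (configfs).
set -e
modprobe nvmet
modprobe nvmet-tcp   # TCP transport for the kernel target (the trace only shows 'modprobe nvmet')

SUBNQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/$SUBNQN
NSDIR=$SUBSYS/namespaces/1
PORT=$NVMET/ports/1
BDEV=/dev/nvme0n1        # backing block device, as in this run
ADDR=10.0.0.1            # address the kernel target listens on, as in this run

mkdir -p "$SUBSYS" "$NSDIR" "$PORT"

echo "SPDK-$SUBNQN" > "$SUBSYS/attr_model"          # reported as the Model Number by identify
echo 1              > "$SUBSYS/attr_allow_any_host" # no host whitelist for the test
echo "$BDEV"        > "$NSDIR/device_path"
echo 1              > "$NSDIR/enable"

echo "$ADDR"        > "$PORT/addr_traddr"
echo tcp            > "$PORT/addr_trtype"
echo 4420           > "$PORT/addr_trsvcid"
echo ipv4           > "$PORT/addr_adrfam"

# expose the subsystem on the port
ln -s "$SUBSYS" "$PORT/subsystems/"
```

The trace then confirms the export with nvme discover and spdk_nvme_identify against 10.0.0.1:4420.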
00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:41.717 No valid GPT data, bailing 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:41.717 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:36:41.717 00:36:41.717 Discovery Log Number of Records 2, Generation counter 2 00:36:41.717 =====Discovery Log Entry 0====== 00:36:41.717 trtype: tcp 00:36:41.717 adrfam: ipv4 00:36:41.717 subtype: current discovery subsystem 00:36:41.717 treq: not specified, sq flow control disable supported 00:36:41.717 portid: 1 00:36:41.717 trsvcid: 4420 00:36:41.717 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:41.717 traddr: 10.0.0.1 00:36:41.717 eflags: none 00:36:41.717 sectype: none 00:36:41.717 =====Discovery Log Entry 1====== 00:36:41.717 trtype: tcp 00:36:41.717 adrfam: ipv4 00:36:41.717 subtype: nvme subsystem 00:36:41.717 treq: not specified, sq flow control disable supported 00:36:41.717 portid: 1 00:36:41.717 trsvcid: 4420 00:36:41.717 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:41.717 traddr: 10.0.0.1 00:36:41.717 eflags: none 00:36:41.717 sectype: none 00:36:41.718 19:41:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:41.718 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:41.980 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.980 ===================================================== 00:36:41.980 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:41.980 ===================================================== 00:36:41.980 Controller Capabilities/Features 00:36:41.980 ================================ 00:36:41.980 Vendor ID: 0000 00:36:41.980 Subsystem Vendor ID: 0000 00:36:41.980 Serial Number: 47f0cc283946248f8206 00:36:41.980 Model Number: Linux 00:36:41.980 Firmware Version: 6.7.0-68 00:36:41.980 Recommended Arb Burst: 0 00:36:41.980 IEEE OUI Identifier: 00 00 00 00:36:41.980 Multi-path I/O 00:36:41.980 May have multiple subsystem ports: No 00:36:41.980 May have multiple controllers: No 00:36:41.980 Associated with SR-IOV VF: No 00:36:41.980 Max Data Transfer Size: Unlimited 00:36:41.980 Max Number of Namespaces: 0 00:36:41.980 Max Number of I/O Queues: 1024 00:36:41.980 NVMe Specification Version (VS): 1.3 00:36:41.980 NVMe Specification Version (Identify): 1.3 00:36:41.980 Maximum Queue Entries: 1024 00:36:41.980 Contiguous Queues Required: No 00:36:41.980 Arbitration Mechanisms Supported 00:36:41.980 Weighted Round Robin: Not Supported 00:36:41.980 Vendor Specific: Not Supported 00:36:41.980 Reset Timeout: 7500 ms 00:36:41.980 Doorbell Stride: 4 bytes 00:36:41.980 NVM Subsystem Reset: Not Supported 00:36:41.980 Command Sets Supported 00:36:41.980 NVM Command Set: Supported 00:36:41.980 Boot Partition: Not Supported 00:36:41.980 Memory Page Size Minimum: 4096 bytes 00:36:41.980 Memory Page Size Maximum: 4096 bytes 00:36:41.980 Persistent Memory Region: Not Supported 00:36:41.980 Optional Asynchronous Events Supported 00:36:41.980 Namespace Attribute Notices: Not Supported 00:36:41.980 Firmware Activation Notices: Not Supported 00:36:41.980 ANA Change Notices: Not Supported 00:36:41.980 PLE Aggregate Log Change Notices: Not Supported 00:36:41.980 LBA Status Info Alert Notices: Not Supported 00:36:41.980 EGE Aggregate Log Change Notices: Not Supported 00:36:41.980 Normal NVM Subsystem Shutdown event: Not Supported 00:36:41.980 Zone Descriptor Change Notices: Not Supported 00:36:41.980 Discovery Log Change Notices: Supported 00:36:41.980 Controller Attributes 00:36:41.980 128-bit Host Identifier: Not Supported 00:36:41.980 Non-Operational Permissive Mode: Not Supported 00:36:41.980 NVM Sets: Not Supported 00:36:41.980 Read Recovery Levels: Not Supported 00:36:41.980 Endurance Groups: Not Supported 00:36:41.980 Predictable Latency Mode: Not Supported 00:36:41.980 Traffic Based Keep ALive: Not Supported 00:36:41.980 Namespace Granularity: Not Supported 00:36:41.980 SQ Associations: Not Supported 00:36:41.980 UUID List: Not Supported 00:36:41.980 Multi-Domain Subsystem: Not Supported 00:36:41.980 Fixed Capacity Management: Not Supported 00:36:41.980 Variable Capacity Management: Not Supported 00:36:41.980 Delete Endurance Group: Not Supported 00:36:41.980 Delete NVM Set: Not Supported 00:36:41.980 Extended LBA Formats Supported: Not Supported 00:36:41.980 Flexible Data Placement Supported: Not Supported 00:36:41.980 00:36:41.980 Controller Memory Buffer Support 00:36:41.980 ================================ 00:36:41.980 Supported: No 
00:36:41.980 00:36:41.980 Persistent Memory Region Support 00:36:41.980 ================================ 00:36:41.980 Supported: No 00:36:41.980 00:36:41.980 Admin Command Set Attributes 00:36:41.980 ============================ 00:36:41.980 Security Send/Receive: Not Supported 00:36:41.980 Format NVM: Not Supported 00:36:41.980 Firmware Activate/Download: Not Supported 00:36:41.980 Namespace Management: Not Supported 00:36:41.980 Device Self-Test: Not Supported 00:36:41.980 Directives: Not Supported 00:36:41.980 NVMe-MI: Not Supported 00:36:41.980 Virtualization Management: Not Supported 00:36:41.980 Doorbell Buffer Config: Not Supported 00:36:41.980 Get LBA Status Capability: Not Supported 00:36:41.980 Command & Feature Lockdown Capability: Not Supported 00:36:41.980 Abort Command Limit: 1 00:36:41.980 Async Event Request Limit: 1 00:36:41.980 Number of Firmware Slots: N/A 00:36:41.980 Firmware Slot 1 Read-Only: N/A 00:36:41.980 Firmware Activation Without Reset: N/A 00:36:41.980 Multiple Update Detection Support: N/A 00:36:41.980 Firmware Update Granularity: No Information Provided 00:36:41.980 Per-Namespace SMART Log: No 00:36:41.980 Asymmetric Namespace Access Log Page: Not Supported 00:36:41.980 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:41.980 Command Effects Log Page: Not Supported 00:36:41.980 Get Log Page Extended Data: Supported 00:36:41.980 Telemetry Log Pages: Not Supported 00:36:41.980 Persistent Event Log Pages: Not Supported 00:36:41.980 Supported Log Pages Log Page: May Support 00:36:41.980 Commands Supported & Effects Log Page: Not Supported 00:36:41.980 Feature Identifiers & Effects Log Page:May Support 00:36:41.980 NVMe-MI Commands & Effects Log Page: May Support 00:36:41.980 Data Area 4 for Telemetry Log: Not Supported 00:36:41.980 Error Log Page Entries Supported: 1 00:36:41.980 Keep Alive: Not Supported 00:36:41.980 00:36:41.980 NVM Command Set Attributes 00:36:41.980 ========================== 00:36:41.980 Submission Queue Entry Size 00:36:41.980 Max: 1 00:36:41.980 Min: 1 00:36:41.980 Completion Queue Entry Size 00:36:41.980 Max: 1 00:36:41.980 Min: 1 00:36:41.980 Number of Namespaces: 0 00:36:41.980 Compare Command: Not Supported 00:36:41.980 Write Uncorrectable Command: Not Supported 00:36:41.980 Dataset Management Command: Not Supported 00:36:41.980 Write Zeroes Command: Not Supported 00:36:41.980 Set Features Save Field: Not Supported 00:36:41.980 Reservations: Not Supported 00:36:41.980 Timestamp: Not Supported 00:36:41.980 Copy: Not Supported 00:36:41.980 Volatile Write Cache: Not Present 00:36:41.981 Atomic Write Unit (Normal): 1 00:36:41.981 Atomic Write Unit (PFail): 1 00:36:41.981 Atomic Compare & Write Unit: 1 00:36:41.981 Fused Compare & Write: Not Supported 00:36:41.981 Scatter-Gather List 00:36:41.981 SGL Command Set: Supported 00:36:41.981 SGL Keyed: Not Supported 00:36:41.981 SGL Bit Bucket Descriptor: Not Supported 00:36:41.981 SGL Metadata Pointer: Not Supported 00:36:41.981 Oversized SGL: Not Supported 00:36:41.981 SGL Metadata Address: Not Supported 00:36:41.981 SGL Offset: Supported 00:36:41.981 Transport SGL Data Block: Not Supported 00:36:41.981 Replay Protected Memory Block: Not Supported 00:36:41.981 00:36:41.981 Firmware Slot Information 00:36:41.981 ========================= 00:36:41.981 Active slot: 0 00:36:41.981 00:36:41.981 00:36:41.981 Error Log 00:36:41.981 ========= 00:36:41.981 00:36:41.981 Active Namespaces 00:36:41.981 ================= 00:36:41.981 Discovery Log Page 00:36:41.981 ================== 00:36:41.981 
Generation Counter: 2 00:36:41.981 Number of Records: 2 00:36:41.981 Record Format: 0 00:36:41.981 00:36:41.981 Discovery Log Entry 0 00:36:41.981 ---------------------- 00:36:41.981 Transport Type: 3 (TCP) 00:36:41.981 Address Family: 1 (IPv4) 00:36:41.981 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:41.981 Entry Flags: 00:36:41.981 Duplicate Returned Information: 0 00:36:41.981 Explicit Persistent Connection Support for Discovery: 0 00:36:41.981 Transport Requirements: 00:36:41.981 Secure Channel: Not Specified 00:36:41.981 Port ID: 1 (0x0001) 00:36:41.981 Controller ID: 65535 (0xffff) 00:36:41.981 Admin Max SQ Size: 32 00:36:41.981 Transport Service Identifier: 4420 00:36:41.981 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:41.981 Transport Address: 10.0.0.1 00:36:41.981 Discovery Log Entry 1 00:36:41.981 ---------------------- 00:36:41.981 Transport Type: 3 (TCP) 00:36:41.981 Address Family: 1 (IPv4) 00:36:41.981 Subsystem Type: 2 (NVM Subsystem) 00:36:41.981 Entry Flags: 00:36:41.981 Duplicate Returned Information: 0 00:36:41.981 Explicit Persistent Connection Support for Discovery: 0 00:36:41.981 Transport Requirements: 00:36:41.981 Secure Channel: Not Specified 00:36:41.981 Port ID: 1 (0x0001) 00:36:41.981 Controller ID: 65535 (0xffff) 00:36:41.981 Admin Max SQ Size: 32 00:36:41.981 Transport Service Identifier: 4420 00:36:41.981 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:41.981 Transport Address: 10.0.0.1 00:36:41.981 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.981 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.981 get_feature(0x01) failed 00:36:41.981 get_feature(0x02) failed 00:36:41.981 get_feature(0x04) failed 00:36:41.981 ===================================================== 00:36:41.981 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.981 ===================================================== 00:36:41.981 Controller Capabilities/Features 00:36:41.981 ================================ 00:36:41.981 Vendor ID: 0000 00:36:41.981 Subsystem Vendor ID: 0000 00:36:41.981 Serial Number: 84024cf49f6da454eca5 00:36:41.981 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:41.981 Firmware Version: 6.7.0-68 00:36:41.981 Recommended Arb Burst: 6 00:36:41.981 IEEE OUI Identifier: 00 00 00 00:36:41.981 Multi-path I/O 00:36:41.981 May have multiple subsystem ports: Yes 00:36:41.981 May have multiple controllers: Yes 00:36:41.981 Associated with SR-IOV VF: No 00:36:41.981 Max Data Transfer Size: Unlimited 00:36:41.981 Max Number of Namespaces: 1024 00:36:41.981 Max Number of I/O Queues: 128 00:36:41.981 NVMe Specification Version (VS): 1.3 00:36:41.981 NVMe Specification Version (Identify): 1.3 00:36:41.981 Maximum Queue Entries: 1024 00:36:41.981 Contiguous Queues Required: No 00:36:41.981 Arbitration Mechanisms Supported 00:36:41.981 Weighted Round Robin: Not Supported 00:36:41.981 Vendor Specific: Not Supported 00:36:41.981 Reset Timeout: 7500 ms 00:36:41.981 Doorbell Stride: 4 bytes 00:36:41.981 NVM Subsystem Reset: Not Supported 00:36:41.981 Command Sets Supported 00:36:41.981 NVM Command Set: Supported 00:36:41.981 Boot Partition: Not Supported 00:36:41.981 Memory Page Size Minimum: 4096 bytes 00:36:41.981 Memory Page Size Maximum: 4096 bytes 00:36:41.981 
Persistent Memory Region: Not Supported 00:36:41.981 Optional Asynchronous Events Supported 00:36:41.981 Namespace Attribute Notices: Supported 00:36:41.981 Firmware Activation Notices: Not Supported 00:36:41.981 ANA Change Notices: Supported 00:36:41.981 PLE Aggregate Log Change Notices: Not Supported 00:36:41.981 LBA Status Info Alert Notices: Not Supported 00:36:41.981 EGE Aggregate Log Change Notices: Not Supported 00:36:41.981 Normal NVM Subsystem Shutdown event: Not Supported 00:36:41.981 Zone Descriptor Change Notices: Not Supported 00:36:41.981 Discovery Log Change Notices: Not Supported 00:36:41.981 Controller Attributes 00:36:41.981 128-bit Host Identifier: Supported 00:36:41.981 Non-Operational Permissive Mode: Not Supported 00:36:41.981 NVM Sets: Not Supported 00:36:41.981 Read Recovery Levels: Not Supported 00:36:41.981 Endurance Groups: Not Supported 00:36:41.981 Predictable Latency Mode: Not Supported 00:36:41.981 Traffic Based Keep ALive: Supported 00:36:41.981 Namespace Granularity: Not Supported 00:36:41.981 SQ Associations: Not Supported 00:36:41.981 UUID List: Not Supported 00:36:41.981 Multi-Domain Subsystem: Not Supported 00:36:41.981 Fixed Capacity Management: Not Supported 00:36:41.981 Variable Capacity Management: Not Supported 00:36:41.981 Delete Endurance Group: Not Supported 00:36:41.981 Delete NVM Set: Not Supported 00:36:41.981 Extended LBA Formats Supported: Not Supported 00:36:41.981 Flexible Data Placement Supported: Not Supported 00:36:41.981 00:36:41.981 Controller Memory Buffer Support 00:36:41.981 ================================ 00:36:41.981 Supported: No 00:36:41.981 00:36:41.981 Persistent Memory Region Support 00:36:41.981 ================================ 00:36:41.981 Supported: No 00:36:41.981 00:36:41.981 Admin Command Set Attributes 00:36:41.981 ============================ 00:36:41.981 Security Send/Receive: Not Supported 00:36:41.981 Format NVM: Not Supported 00:36:41.981 Firmware Activate/Download: Not Supported 00:36:41.981 Namespace Management: Not Supported 00:36:41.981 Device Self-Test: Not Supported 00:36:41.981 Directives: Not Supported 00:36:41.981 NVMe-MI: Not Supported 00:36:41.981 Virtualization Management: Not Supported 00:36:41.981 Doorbell Buffer Config: Not Supported 00:36:41.981 Get LBA Status Capability: Not Supported 00:36:41.981 Command & Feature Lockdown Capability: Not Supported 00:36:41.981 Abort Command Limit: 4 00:36:41.981 Async Event Request Limit: 4 00:36:41.981 Number of Firmware Slots: N/A 00:36:41.981 Firmware Slot 1 Read-Only: N/A 00:36:41.981 Firmware Activation Without Reset: N/A 00:36:41.981 Multiple Update Detection Support: N/A 00:36:41.981 Firmware Update Granularity: No Information Provided 00:36:41.981 Per-Namespace SMART Log: Yes 00:36:41.981 Asymmetric Namespace Access Log Page: Supported 00:36:41.981 ANA Transition Time : 10 sec 00:36:41.981 00:36:41.981 Asymmetric Namespace Access Capabilities 00:36:41.981 ANA Optimized State : Supported 00:36:41.981 ANA Non-Optimized State : Supported 00:36:41.981 ANA Inaccessible State : Supported 00:36:41.981 ANA Persistent Loss State : Supported 00:36:41.981 ANA Change State : Supported 00:36:41.981 ANAGRPID is not changed : No 00:36:41.981 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:41.981 00:36:41.981 ANA Group Identifier Maximum : 128 00:36:41.981 Number of ANA Group Identifiers : 128 00:36:41.981 Max Number of Allowed Namespaces : 1024 00:36:41.981 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:41.981 Command Effects Log Page: Supported 
00:36:41.981 Get Log Page Extended Data: Supported 00:36:41.981 Telemetry Log Pages: Not Supported 00:36:41.981 Persistent Event Log Pages: Not Supported 00:36:41.981 Supported Log Pages Log Page: May Support 00:36:41.981 Commands Supported & Effects Log Page: Not Supported 00:36:41.981 Feature Identifiers & Effects Log Page:May Support 00:36:41.981 NVMe-MI Commands & Effects Log Page: May Support 00:36:41.981 Data Area 4 for Telemetry Log: Not Supported 00:36:41.981 Error Log Page Entries Supported: 128 00:36:41.981 Keep Alive: Supported 00:36:41.981 Keep Alive Granularity: 1000 ms 00:36:41.981 00:36:41.981 NVM Command Set Attributes 00:36:41.981 ========================== 00:36:41.982 Submission Queue Entry Size 00:36:41.982 Max: 64 00:36:41.982 Min: 64 00:36:41.982 Completion Queue Entry Size 00:36:41.982 Max: 16 00:36:41.982 Min: 16 00:36:41.982 Number of Namespaces: 1024 00:36:41.982 Compare Command: Not Supported 00:36:41.982 Write Uncorrectable Command: Not Supported 00:36:41.982 Dataset Management Command: Supported 00:36:41.982 Write Zeroes Command: Supported 00:36:41.982 Set Features Save Field: Not Supported 00:36:41.982 Reservations: Not Supported 00:36:41.982 Timestamp: Not Supported 00:36:41.982 Copy: Not Supported 00:36:41.982 Volatile Write Cache: Present 00:36:41.982 Atomic Write Unit (Normal): 1 00:36:41.982 Atomic Write Unit (PFail): 1 00:36:41.982 Atomic Compare & Write Unit: 1 00:36:41.982 Fused Compare & Write: Not Supported 00:36:41.982 Scatter-Gather List 00:36:41.982 SGL Command Set: Supported 00:36:41.982 SGL Keyed: Not Supported 00:36:41.982 SGL Bit Bucket Descriptor: Not Supported 00:36:41.982 SGL Metadata Pointer: Not Supported 00:36:41.982 Oversized SGL: Not Supported 00:36:41.982 SGL Metadata Address: Not Supported 00:36:41.982 SGL Offset: Supported 00:36:41.982 Transport SGL Data Block: Not Supported 00:36:41.982 Replay Protected Memory Block: Not Supported 00:36:41.982 00:36:41.982 Firmware Slot Information 00:36:41.982 ========================= 00:36:41.982 Active slot: 0 00:36:41.982 00:36:41.982 Asymmetric Namespace Access 00:36:41.982 =========================== 00:36:41.982 Change Count : 0 00:36:41.982 Number of ANA Group Descriptors : 1 00:36:41.982 ANA Group Descriptor : 0 00:36:41.982 ANA Group ID : 1 00:36:41.982 Number of NSID Values : 1 00:36:41.982 Change Count : 0 00:36:41.982 ANA State : 1 00:36:41.982 Namespace Identifier : 1 00:36:41.982 00:36:41.982 Commands Supported and Effects 00:36:41.982 ============================== 00:36:41.982 Admin Commands 00:36:41.982 -------------- 00:36:41.982 Get Log Page (02h): Supported 00:36:41.982 Identify (06h): Supported 00:36:41.982 Abort (08h): Supported 00:36:41.982 Set Features (09h): Supported 00:36:41.982 Get Features (0Ah): Supported 00:36:41.982 Asynchronous Event Request (0Ch): Supported 00:36:41.982 Keep Alive (18h): Supported 00:36:41.982 I/O Commands 00:36:41.982 ------------ 00:36:41.982 Flush (00h): Supported 00:36:41.982 Write (01h): Supported LBA-Change 00:36:41.982 Read (02h): Supported 00:36:41.982 Write Zeroes (08h): Supported LBA-Change 00:36:41.982 Dataset Management (09h): Supported 00:36:41.982 00:36:41.982 Error Log 00:36:41.982 ========= 00:36:41.982 Entry: 0 00:36:41.982 Error Count: 0x3 00:36:41.982 Submission Queue Id: 0x0 00:36:41.982 Command Id: 0x5 00:36:41.982 Phase Bit: 0 00:36:41.982 Status Code: 0x2 00:36:41.982 Status Code Type: 0x0 00:36:41.982 Do Not Retry: 1 00:36:41.982 Error Location: 0x28 00:36:41.982 LBA: 0x0 00:36:41.982 Namespace: 0x0 00:36:41.982 Vendor Log 
Page: 0x0 00:36:41.982 ----------- 00:36:41.982 Entry: 1 00:36:41.982 Error Count: 0x2 00:36:41.982 Submission Queue Id: 0x0 00:36:41.982 Command Id: 0x5 00:36:41.982 Phase Bit: 0 00:36:41.982 Status Code: 0x2 00:36:41.982 Status Code Type: 0x0 00:36:41.982 Do Not Retry: 1 00:36:41.982 Error Location: 0x28 00:36:41.982 LBA: 0x0 00:36:41.982 Namespace: 0x0 00:36:41.982 Vendor Log Page: 0x0 00:36:41.982 ----------- 00:36:41.982 Entry: 2 00:36:41.982 Error Count: 0x1 00:36:41.982 Submission Queue Id: 0x0 00:36:41.982 Command Id: 0x4 00:36:41.982 Phase Bit: 0 00:36:41.982 Status Code: 0x2 00:36:41.982 Status Code Type: 0x0 00:36:41.982 Do Not Retry: 1 00:36:41.982 Error Location: 0x28 00:36:41.982 LBA: 0x0 00:36:41.982 Namespace: 0x0 00:36:41.982 Vendor Log Page: 0x0 00:36:41.982 00:36:41.982 Number of Queues 00:36:41.982 ================ 00:36:41.982 Number of I/O Submission Queues: 128 00:36:41.982 Number of I/O Completion Queues: 128 00:36:41.982 00:36:41.982 ZNS Specific Controller Data 00:36:41.982 ============================ 00:36:41.982 Zone Append Size Limit: 0 00:36:41.982 00:36:41.982 00:36:41.982 Active Namespaces 00:36:41.982 ================= 00:36:41.982 get_feature(0x05) failed 00:36:41.982 Namespace ID:1 00:36:41.982 Command Set Identifier: NVM (00h) 00:36:41.982 Deallocate: Supported 00:36:41.982 Deallocated/Unwritten Error: Not Supported 00:36:41.982 Deallocated Read Value: Unknown 00:36:41.982 Deallocate in Write Zeroes: Not Supported 00:36:41.982 Deallocated Guard Field: 0xFFFF 00:36:41.982 Flush: Supported 00:36:41.982 Reservation: Not Supported 00:36:41.982 Namespace Sharing Capabilities: Multiple Controllers 00:36:41.982 Size (in LBAs): 3750748848 (1788GiB) 00:36:41.982 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:41.982 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:41.982 UUID: 3bc2c312-acd7-4dce-a771-ea2c9874a673 00:36:41.982 Thin Provisioning: Not Supported 00:36:41.982 Per-NS Atomic Units: Yes 00:36:41.982 Atomic Write Unit (Normal): 8 00:36:41.982 Atomic Write Unit (PFail): 8 00:36:41.982 Preferred Write Granularity: 8 00:36:41.982 Atomic Compare & Write Unit: 8 00:36:41.982 Atomic Boundary Size (Normal): 0 00:36:41.982 Atomic Boundary Size (PFail): 0 00:36:41.982 Atomic Boundary Offset: 0 00:36:41.982 NGUID/EUI64 Never Reused: No 00:36:41.982 ANA group ID: 1 00:36:41.982 Namespace Write Protected: No 00:36:41.982 Number of LBA Formats: 1 00:36:41.982 Current LBA Format: LBA Format #00 00:36:41.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:41.982 00:36:41.982 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:41.982 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:41.982 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:42.243 rmmod nvme_tcp 00:36:42.243 rmmod nvme_fabrics 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:42.243 19:41:00 
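Editorial note on the two identify passes traced above: spdk_nvme_identify is driven with a TCP transport ID string; the first pass targets the well-known discovery NQN and returns the discovery log page (two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn), the second connects to that test subsystem and dumps its controller, ANA and namespace data. A minimal sketch of the same queries, using the SPDK build path and the 10.0.0.1:4420 kernel listener seen in this run (adjust both for other setups):

#!/usr/bin/env bash
# Sketch: reproduce the two identify passes from this test run.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

# 1) Query the well-known discovery subsystem for its discovery log page.
"$SPDK_BIN/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

# 2) Identify the NVM subsystem advertised by discovery log entry 1 above.
"$SPDK_BIN/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'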
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:42.243 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:42.244 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:42.244 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:42.244 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:42.244 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.244 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.244 19:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:44.157 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:44.418 19:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:47.721 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 
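Editorial note on the clean_kernel_target trace above: it tears down the kernel nvmet target created earlier by disabling the namespace, removing its configfs entries in reverse order of creation, and unloading the nvmet modules. A consolidated sketch of those steps, assuming the same nqn.2016-06.io.spdk:testnqn subsystem and port 1 layout used here (the target of the traced "echo 0" is not shown by xtrace; the namespace enable attribute is assumed):

#!/usr/bin/env bash
# Sketch: kernel nvmet teardown, condensed from the clean_kernel_target trace.
set -e
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"                # disable the namespace (attribute assumed)
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink the subsystem from the port
rmdir "$subsys/namespaces/1"                          # remove namespace, then port, then subsystem
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                           # unload the kernel target modules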
0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:47.721 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:47.982 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:47.982 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:47.982 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:48.242 00:36:48.242 real 0m18.646s 00:36:48.242 user 0m5.158s 00:36:48.242 sys 0m10.447s 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:48.242 ************************************ 00:36:48.242 END TEST nvmf_identify_kernel_target 00:36:48.242 ************************************ 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.242 ************************************ 00:36:48.242 START TEST nvmf_auth_host 00:36:48.242 ************************************ 00:36:48.242 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:48.504 * Looking for test storage... 00:36:48.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:48.504 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:36:48.505 19:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:36:56.663 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:56.664 19:41:14 
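Editorial note on gather_supported_nvmf_pci_devs above: it builds the NIC candidate list from PCI vendor/device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs) and then, for TCP, resolves each PCI function to its kernel net device through sysfs, which is how the two cvl_0_* interfaces are reported in the lines that follow. A small sketch of that sysfs lookup, using the 0000:4b:00.0 address from this run as a hypothetical default argument:

#!/usr/bin/env bash
# Sketch: resolve a PCI network function to its net device name via sysfs.
pci=${1:-0000:4b:00.0}   # example address from this run; pass another as $1

for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue          # no net device bound (e.g. driver unbound)
    dev=${netdir##*/}
    state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
    echo "Found net device under $pci: $dev (operstate: $state)"
done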
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:56.664 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:56.664 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:56.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:56.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:56.664 19:41:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:56.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:56.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:36:56.664 00:36:56.664 --- 10.0.0.2 ping statistics --- 00:36:56.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.664 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:56.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:56.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:36:56.664 00:36:56.664 --- 10.0.0.1 ping statistics --- 00:36:56.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.664 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3149250 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3149250 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3149250 ']' 00:36:56.664 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.665 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:56.665 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
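Editorial note on nvmf_tcp_init above: one physical port (cvl_0_0) is moved into a dedicated network namespace to serve as the target at 10.0.0.2, while the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the two pings verify the path in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of that setup, assuming the same interface names and addresses as this run (requires root and real back-to-back ports):

#!/usr/bin/env bash
# Sketch: the namespace topology used by this test, condensed from the trace.
set -e
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace
INI_IF=cvl_0_1   # initiator-side port, left in the root namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

# The target application is then started inside the namespace, as in the log:
#   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth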
00:36:56.665 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:56.665 19:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0bce3e2329d7cbe919c57b69474819dc 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Obl 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0bce3e2329d7cbe919c57b69474819dc 0 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0bce3e2329d7cbe919c57b69474819dc 0 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0bce3e2329d7cbe919c57b69474819dc 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Obl 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Obl 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Obl 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.665 19:41:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a0b63a4b715e81537ef773a26e1cfc4c4b05af501da2a0d556e8cc3221157fd3 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uri 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a0b63a4b715e81537ef773a26e1cfc4c4b05af501da2a0d556e8cc3221157fd3 3 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a0b63a4b715e81537ef773a26e1cfc4c4b05af501da2a0d556e8cc3221157fd3 3 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a0b63a4b715e81537ef773a26e1cfc4c4b05af501da2a0d556e8cc3221157fd3 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uri 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uri 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uri 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1e647c0ddf8c78b60432345e373a2ebaf320d1b9c5f364cf 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.p54 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1e647c0ddf8c78b60432345e373a2ebaf320d1b9c5f364cf 0 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1e647c0ddf8c78b60432345e373a2ebaf320d1b9c5f364cf 0 
00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1e647c0ddf8c78b60432345e373a2ebaf320d1b9c5f364cf 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.p54 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.p54 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.p54 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9130c58a03596b44993623ee54b6b2f6935f951f753e9db9 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VRd 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9130c58a03596b44993623ee54b6b2f6935f951f753e9db9 2 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9130c58a03596b44993623ee54b6b2f6935f951f753e9db9 2 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9130c58a03596b44993623ee54b6b2f6935f951f753e9db9 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:56.665 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VRd 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VRd 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VRd 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.936 19:41:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0cbddcf8ecf5f6471aea04c5c3f0c891 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bR4 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0cbddcf8ecf5f6471aea04c5c3f0c891 1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0cbddcf8ecf5f6471aea04c5c3f0c891 1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0cbddcf8ecf5f6471aea04c5c3f0c891 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bR4 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bR4 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.bR4 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9022aa78a60042bed41855cab2148dfb 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qZq 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9022aa78a60042bed41855cab2148dfb 1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9022aa78a60042bed41855cab2148dfb 1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=9022aa78a60042bed41855cab2148dfb 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qZq 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qZq 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qZq 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=582ecaf391f195da105dfea4994828715dff3593324b2078 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ftQ 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 582ecaf391f195da105dfea4994828715dff3593324b2078 2 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 582ecaf391f195da105dfea4994828715dff3593324b2078 2 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=582ecaf391f195da105dfea4994828715dff3593324b2078 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ftQ 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ftQ 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ftQ 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:56.936 19:41:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=593c7364646f2ab61db34449649a1511 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fhH 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 593c7364646f2ab61db34449649a1511 0 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 593c7364646f2ab61db34449649a1511 0 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=593c7364646f2ab61db34449649a1511 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fhH 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fhH 00:36:56.936 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fhH 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=96f3c0b08e2dd2da88a44c2891271d032005d14cd98f774cda58be53cd47b19b 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Qdk 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 96f3c0b08e2dd2da88a44c2891271d032005d14cd98f774cda58be53cd47b19b 3 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 96f3c0b08e2dd2da88a44c2891271d032005d14cd98f774cda58be53cd47b19b 3 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=96f3c0b08e2dd2da88a44c2891271d032005d14cd98f774cda58be53cd47b19b 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:57.202 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Qdk 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Qdk 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Qdk 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3149250 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3149250 ']' 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:57.203 19:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Obl 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uri ]] 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uri 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.p54 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VRd ]] 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.VRd 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.203 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.bR4 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qZq ]] 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qZq 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.463 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ftQ 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fhH ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fhH 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Qdk 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:57.464 19:41:16 
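
Once the key files exist, the host/auth.sh loop above registers each one with the running SPDK application over its JSON-RPC socket. rpc_cmd is the test wrapper around SPDK's scripts/rpc.py pointed at /var/tmp/spdk.sock (the socket named by the waitforlisten message); the sketch below shows the equivalent direct calls for one key/ckey pair, using the key names and file paths from the trace. The rpc.py path is assumed from the workspace layout seen elsewhere in this log.

# Register a DH-HMAC-CHAP secret file and its controller-side counterpart with the
# SPDK keyring, equivalent to the rpc_cmd keyring_file_add_key calls traced above.
SPDK_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path assumed
SOCK=/var/tmp/spdk.sock

$SPDK_RPC -s $SOCK keyring_file_add_key key2  /tmp/spdk.key-sha256.bR4
$SPDK_RPC -s $SOCK keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qZq
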
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:57.464 19:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:00.765 Waiting for block devices as requested 00:37:00.765 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:00.765 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:00.765 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:00.765 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:00.765 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:01.026 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:01.026 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:01.026 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:01.026 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:01.287 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:01.591 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:01.591 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:01.591 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:01.591 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:01.865 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:01.865 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:01.865 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:02.825 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:02.825 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:02.825 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:02.825 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:37:02.825 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:02.826 No valid GPT data, bailing 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:02.826 19:41:21 
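
The configure_kernel_target steps traced here (the mkdirs above and the echo/ln -s calls that follow) build a Linux kernel NVMe-oF target through configfs rather than through SPDK: load nvmet, create a subsystem with a namespace backed by the local /dev/nvme0n1, create a TCP port on 10.0.0.1:4420, and link the port to the subsystem. The xtrace records only the echoed values, not the attribute files they are redirected into; the attribute names below (attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet configfs names and are an assumption about where those values land.

# Minimal kernel nvmet target, reconstructed from the traced mkdir/echo/ln -s steps.
# Attribute file names are assumed; the echoed values come from the trace.
modprobe nvmet
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
PORT=/sys/kernel/config/nvmet/ports/1

mkdir "$SUBSYS"
mkdir "$SUBSYS/namespaces/1"
mkdir "$PORT"

echo 1            > "$SUBSYS/attr_allow_any_host"        # assumed target of one "echo 1"
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1            > "$SUBSYS/namespaces/1/enable"

echo 10.0.0.1     > "$PORT/addr_traddr"
echo tcp          > "$PORT/addr_trtype"
echo 4420         > "$PORT/addr_trsvcid"
echo ipv4         > "$PORT/addr_adrfam"

ln -s "$SUBSYS" "$PORT/subsystems/"
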
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:02.826 00:37:02.826 Discovery Log Number of Records 2, Generation counter 2 00:37:02.826 =====Discovery Log Entry 0====== 00:37:02.826 trtype: tcp 00:37:02.826 adrfam: ipv4 00:37:02.826 subtype: current discovery subsystem 00:37:02.826 treq: not specified, sq flow control disable supported 00:37:02.826 portid: 1 00:37:02.826 trsvcid: 4420 00:37:02.826 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:02.826 traddr: 10.0.0.1 00:37:02.826 eflags: none 00:37:02.826 sectype: none 00:37:02.826 =====Discovery Log Entry 1====== 00:37:02.826 trtype: tcp 00:37:02.826 adrfam: ipv4 00:37:02.826 subtype: nvme subsystem 00:37:02.826 treq: not specified, sq flow control disable supported 00:37:02.826 portid: 1 00:37:02.826 trsvcid: 4420 00:37:02.826 subnqn: nqn.2024-02.io.spdk:cnode0 00:37:02.826 traddr: 10.0.0.1 00:37:02.826 eflags: none 00:37:02.826 sectype: none 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.826 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.087 nvme0n1 00:37:03.087 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.087 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.087 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.087 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.087 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.087 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.087 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.088 19:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
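
Each connect_authenticate iteration in the trace is the same four-step RPC sequence against the local SPDK initiator: restrict the allowed DH-HMAC-CHAP digests and DH groups, attach a controller to the kernel target at 10.0.0.1:4420 with one of the registered keys (plus the matching controller key when one exists), confirm a controller named nvme0 shows up, and detach it again. Condensed from the rpc_cmd calls above, with the rpc.py path and socket as in the earlier sketch:

# One authenticate-connect-verify-disconnect cycle, as driven by host/auth.sh.
$SPDK_RPC -s $SOCK bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

$SPDK_RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

$SPDK_RPC -s $SOCK bdev_nvme_get_controllers      # expect a controller named nvme0
$SPDK_RPC -s $SOCK bdev_nvme_detach_controller nvme0
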
00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.088 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.349 nvme0n1 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.349 19:41:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.349 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.610 nvme0n1 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:03.610 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.611 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.872 nvme0n1 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:03.872 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.873 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.133 nvme0n1 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:04.133 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.134 19:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.395 nvme0n1 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.395 19:41:23 
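
Note how the key4 attach above carries no --dhchap-ctrlr-key at all: ckeys[4] was left empty during key generation, and the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced at host/auth.sh@58 yields an empty array in that case, so bidirectional authentication is simply skipped for that key. A small standalone illustration of the ${var:+...} pattern:

# ${var:+word} expands to "word" only when var is set and non-empty, which is how
# host/auth.sh optionally appends the controller-key arguments.
ckeys=([1]="/tmp/ckey1" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[@]:-<none>}"
done
# keyid=1 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 extra args: <none>
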
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.395 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.655 nvme0n1 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:04.655 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:04.656 
19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.656 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.916 nvme0n1 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.916 19:41:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:04.916 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.917 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.177 nvme0n1 00:37:05.177 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.177 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.177 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.177 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.177 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.177 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.177 19:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.177 19:41:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:05.177 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:05.178 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:05.178 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:05.178 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.178 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.439 nvme0n1 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:05.439 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:05.440 19:41:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.440 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.701 nvme0n1 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.701 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.962 nvme0n1 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.962 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:06.223 19:41:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.223 19:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.485 nvme0n1 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.485 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.746 nvme0n1 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.746 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.007 nvme0n1 00:37:07.007 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.007 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.007 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.007 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.007 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.267 19:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:07.267 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.268 19:41:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.268 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.528 nvme0n1 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:07.528 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.529 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.100 nvme0n1 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:08.100 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 
00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.101 19:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.672 nvme0n1 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.672 19:41:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.672 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.244 nvme0n1 00:37:09.244 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.244 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.244 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.244 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.244 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.244 19:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.244 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.244 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.244 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.244 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.245 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.816 nvme0n1 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.816 19:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.388 nvme0n1 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:10.388 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:11.330 nvme0n1 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:11.330 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.331 19:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.902 nvme0n1 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:37:11.902 
19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.902 19:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.845 nvme0n1 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:12.845 
19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.845 19:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.787 nvme0n1 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
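Each of these iterations starts on the target side with nvmet_auth_set_key (host/auth.sh@42-51), which programs the digest, DH group and key material for the host NQN; xtrace only records the echo commands, not where their output is redirected. The sketch below is an inference under that limitation: the configfs attribute paths are assumptions based on the kernel nvmet host entries, not something shown in this log.

# Sketch only: the dhchap_* attribute paths are assumed, not visible in the log above.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host_cfs/dhchap_hash"       # e.g. hmac(sha256)
    echo "$dhgroup"        > "$host_cfs/dhchap_dhgroup"    # e.g. ffdhe8192
    echo "${keys[keyid]}"  > "$host_cfs/dhchap_key"
    # The controller key is optional; key id 4 above has none, so the write is skipped.
    [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host_cfs/dhchap_ctrl_key"
}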
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.787 19:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.359 nvme0n1 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.359 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:14.619 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.620 nvme0n1 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:14.620 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.880 nvme0n1 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:14.880 19:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.880 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.140 nvme0n1 00:37:15.140 19:41:33 
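Before every attach, the nvmf/common.sh@741-755 block resolves which address to dial: the tcp transport maps to the variable name NVMF_INITIATOR_IP, and the echoed result is that variable's value, 10.0.0.1 in this run. A sketch of that resolution follows; the indirect expansion through ${!ip} is an inference from the xtrace output rather than code quoted from nvmf/common.sh.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1                      # no transport configured
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1    # transport has no candidate
    ip=${ip_candidates[$TEST_TRANSPORT]}                      # candidate variable *name*
    [[ -z ${!ip} ]] && return 1                               # candidate variable is empty
    echo "${!ip}"                                             # dereference: 10.0.0.1 here
}

# Example matching the values in this log:
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip    # prints 10.0.0.1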
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.140 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.140 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.140 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.140 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.140 19:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:15.140 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.141 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.401 nvme0n1 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.401 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.661 nvme0n1 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:15.661 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.662 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.922 nvme0n1 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.922 
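The for-loops at host/auth.sh@101-103, which the trace has just re-entered for ffdhe3072, drive the whole matrix for the current digest: every DH group is exercised against every key id. Reduced to stand-alone stubs (the genuine helper bodies are the traced blocks themselves), the structure is roughly:

# Loop nest behind host/auth.sh@101-104 as seen in the trace; keys and the two
# helpers are stand-in stubs here, the real bodies appear in the log above.
digest=sha384
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # the groups this part of the log walks through
keys=(k0 k1 k2 k3 k4)                      # placeholder secrets, five key ids as in the run

nvmet_auth_set_key()   { echo "target side: $1/$2 keyid=$3"; }   # stub
connect_authenticate() { echo "host side:   $1/$2 keyid=$3"; }   # stub

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel nvmet target
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach/verify/detach via SPDK RPC
    done
done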
19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:15.922 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:15.923 19:41:34 
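On the target side, nvmet_auth_set_key (host/auth.sh@42-51, traced again just above for ffdhe3072/keyid=1) echoes the digest, the DH group, the host key and, when present, the controller key into the kernel nvmet configuration. The configfs paths in the sketch below are not visible in this excerpt and are my assumption about where those echoes land; the values are the ones from the traced iteration, with the keys truncated for readability (the full strings are in the log).

# Sketch only: plausible destination of the echoes at host/auth.sh@48-51. The
# /sys/kernel/config/nvmet/... attribute paths are assumptions, not shown in this log.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha384)'      > "$host_cfs/dhchap_hash"      # digest for this iteration
echo ffdhe3072           > "$host_cfs/dhchap_dhgroup"   # DH group for this iteration
echo 'DHHC-1:00:MWU2...' > "$host_cfs/dhchap_key"       # host key (truncated here)
echo 'DHHC-1:02:OTEz...' > "$host_cfs/dhchap_ctrl_key"  # controller key, skipped when empty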
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.923 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.183 nvme0n1 00:37:16.183 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.183 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.183 19:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.183 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.183 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.183 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.183 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.184 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.445 nvme0n1 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.445 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.706 nvme0n1 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:16.706 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:16.706 
19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.707 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.967 nvme0n1 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.967 
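Note the iteration that just finished (keyid=4): its controller key is empty, so host/auth.sh@58 builds the extra argument with the ${var:+...} expansion and the attach at @61 is issued with --dhchap-key key4 only, no --dhchap-ctrlr-key. A minimal, self-contained illustration of that idiom follows; the placeholder key table is mine, only the expansion pattern is the script's.

# Demonstrates the ${var:+...} trick from host/auth.sh@58 with placeholder values:
# the --dhchap-ctrlr-key pair is emitted only when a controller key exists.
ckeys=("ck0" "ck1" "ck2" "ck3" "")          # keyid 4 deliberately has no controller key

for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
# keyid 0..3 print '--dhchap-ctrlr-key ckeyN'; keyid 4 prints '<none>'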
19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:16.967 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.968 19:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.539 nvme0n1 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:17.539 19:41:36 
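One reading aid for the checks that recur throughout this trace: the form [[ nvme0 == \n\v\m\e\0 ]] is not corrupted output. Bash's xtrace re-quotes the right-hand side of the comparison by backslash-escaping every character, which is how it prints a quoted, literal (non-glob) pattern. A two-line demonstration:

# xtrace prints quoted [[ == ]] patterns with per-character backslashes; both
# forms below perform the same literal string comparison against "nvme0".
name=nvme0
[[ $name == \n\v\m\e\0 ]] && echo "literal match"   # identical to [[ $name == "nvme0" ]]
[[ $name == nvme* ]]      && echo "glob match"      # an unescaped pattern would glob instead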
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.539 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.800 nvme0n1 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:17.800 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.801 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.062 nvme0n1 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:18.062 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.063 19:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.362 nvme0n1 00:37:18.362 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.362 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.362 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.362 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.362 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.362 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:18.622 19:41:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:18.622 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.623 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:18.623 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:18.623 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:18.623 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:18.623 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.623 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.883 nvme0n1 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
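The block that just completed (key IDs 0 through 4 against sha384/ffdhe4096) repeats one fixed per-key cycle, and the same cycle recurs below for ffdhe6144 and ffdhe8192: program the secret on the kernel nvmet target side, restrict the host to a single digest/dhgroup pair, attach, verify the controller exists, detach. The following is a condensed sketch of that cycle as it can be read from the xtrace output, not the literal host/auth.sh source; connect_cycle is a name chosen here for illustration, while rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the keys/ckeys arrays are the harness helpers visible in the trace.

# Condensed sketch of the cycle the trace repeats for every key ID.
# Assumes the SPDK test helpers (rpc_cmd, nvmet_auth_set_key, get_main_ns_ip)
# and the keys[]/ckeys[] arrays defined earlier in host/auth.sh are in scope.
connect_cycle() {
    local digest=$1 dhgroup=$2 keyid=$3
    # target side: program the same secret into the kernel nvmet host entry
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
    # host side: allow only this digest/dhgroup pair for the handshake
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # pass --dhchap-ctrlr-key only when a controller key exists for this key ID
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # the nvme0 controller only appears if DH-HMAC-CHAP succeeded
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}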
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:18.883 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.884 19:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.459 nvme0n1 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.459 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.030 nvme0n1 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.030 19:41:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.030 19:41:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.030 19:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.600 nvme0n1 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:20.600 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:20.601 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.601 
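The DHHC-1 strings echoed throughout this trace are the NVMe DH-HMAC-CHAP secret representation, "DHHC-1:<hh>:<base64>:". As I read that format, the two-digit field records which hash the secret was generated for (00 unqualified, 01/02/03 for SHA-256/384/512) and the base64 payload carries the raw secret followed by a 4-byte CRC-32. The small helper below is a sketch offered for inspection only, not part of the test, and it decodes one of the keyid=3 secrets visible in the trace above.

# Sketch: report the hash qualifier and payload size of a DHHC-1 secret.
# The 01/02/03 -> SHA-256/384/512 mapping and the trailing CRC-32 follow my
# reading of the DH-HMAC-CHAP secret format; treat them as assumptions.
inspect_dhchap_key() {
    local key=$1 field b64 hash
    IFS=: read -r _ field b64 _ <<< "$key"
    case $field in
        00) hash="unqualified" ;;
        01) hash="sha256" ;;
        02) hash="sha384" ;;
        03) hash="sha512" ;;
        *)  hash="unknown" ;;
    esac
    printf '%s-qualified secret, %d-byte payload (secret + 4-byte CRC-32)\n' \
        "$hash" "$(printf '%s' "$b64" | base64 -d | wc -c)"
}
# one of the keyid=3 secrets appearing in this trace
inspect_dhchap_key "DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==:"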
19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.172 nvme0n1 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.172 19:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.743 nvme0n1 00:37:21.743 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.743 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.743 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.743 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.743 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.744 19:41:40 
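The attach for key ID 4 just above went out without any --dhchap-ctrlr-key argument because ckeys[4] is empty: the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 builds either a two-element argument array or an empty one. A tiny standalone demo of that expansion, using hypothetical placeholder values rather than the real secrets:

# Demo of the ${var:+...} trick from host/auth.sh@58.
# The ckeys values here are hypothetical placeholders, not real secrets.
ckeys=([3]="DHHC-1:00:placeholder:" [4]="")
for keyid in 3 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]:-<none>}"
done
# keyid=3 -> 2 extra args: --dhchap-ctrlr-key ckey3
# keyid=4 -> 0 extra args: <none>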
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.744 19:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.315 nvme0n1 00:37:22.315 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.315 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.315 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.315 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.315 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.315 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.576 19:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.148 nvme0n1 00:37:23.148 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.148 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.148 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.148 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.148 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.148 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:23.408 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.409 
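The get_main_ns_ip trace interleaved here (nvmf/common.sh@741-755) resolves which address the host should dial: it maps each transport to the name of the environment variable holding the address and then dereferences it, which for this TCP run yields 10.0.0.1 via NVMF_INITIATOR_IP. The reconstruction below is inferred from the xtrace output only; the exact conditionals in nvmf/common.sh may differ, and TEST_TRANSPORT is an assumption for the variable selecting tcp here.

# Rough reconstruction of the address lookup traced at nvmf/common.sh@741-755.
# Only NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP are taken from the log;
# everything else is an inferred sketch.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # pick the variable *name* for the active transport (tcp in this run) ...
    [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # ... then dereference it; the trace shows this resolving to 10.0.0.1
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}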
19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.409 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.980 nvme0n1 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.980 19:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.921 nvme0n1 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.921 19:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.921 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:24.922 19:41:43 
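Every cycle in this trace ends with the same sanity check before teardown: list the controllers over RPC and compare the returned name against nvme0. The backslash-escaped right-hand side in the log, \n\v\m\e\0, is simply how xtrace prints the == pattern so that the expanded controller name is matched literally rather than as a glob. Written out plainly, and assuming rpc_cmd is the harness RPC wrapper seen throughout the log:

# Plain form of the '[[ nvme0 == \n\v\m\e\0 ]]' check from the trace;
# escaping every character keeps == from doing pattern matching.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]            # authentication succeeded, the bdev controller exists
rpc_cmd bdev_nvme_detach_controller nvme0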
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.922 19:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.865 nvme0n1 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:25.865 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:25.866 nvme0n1 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.866 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.128 nvme0n1 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.128 19:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:37:26.128 
19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.128 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.389 nvme0n1 00:37:26.389 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.389 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.389 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.389 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.390 
19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.390 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.651 nvme0n1 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.651 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.913 nvme0n1 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.913 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.174 nvme0n1 00:37:27.174 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.174 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.174 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.174 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.174 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.174 19:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.174 
19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:27.174 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:27.175 19:41:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.175 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.436 nvme0n1 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:27.436 19:41:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.436 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.698 nvme0n1 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.698 19:41:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.698 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.974 nvme0n1 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:27.974 
19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.974 19:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
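[Editor's note] The trace around this point is dense, so the following is a condensed sketch of the loop host/auth.sh is driving, reconstructed only from the commands visible in this log (host/auth.sh@100-@104, @42-@51, @55-@65): for each digest/DH-group/key-id combination it programs the key on the target side, restricts the host to that digest and DH group, attaches the controller over TCP with the per-key (and, when one exists, controller-key) material, confirms the controller came up, then detaches before the next combination. This is a simplified sketch, not the verbatim script; helper internals such as nvmet_auth_set_key and get_main_ns_ip are elided, and the digests/dhgroups arrays are assumed to hold the values seen in this trace (sha384/sha512 with ffdhe2048-ffdhe8192).

    # Sketch only - reconstructed from the trace above, not the verbatim host/auth.sh
    for digest in "${digests[@]}"; do                 # e.g. sha384, sha512 in this part of the log
      for dhgroup in "${dhgroups[@]}"; do             # e.g. ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do
          # program the key (and controller key, if any) on the kernel nvmet target
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

          # restrict the SPDK host to this digest / DH group
          rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

          # attach over TCP; the ctrlr key is only passed when ckey$keyid exists
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

          # verify the authenticated controller is present, then tear it down
          [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done

Each "nvme0n1" line in the surrounding trace is the namespace appearing after a successful authenticated attach; the repeated "[[ 0 == 0 ]]" checks are the rpc_cmd return-code assertions from common/autotest_common.sh.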
00:37:28.235 nvme0n1 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:28.235 19:41:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.235 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.496 nvme0n1 00:37:28.496 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.758 19:41:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.758 19:41:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.758 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.020 nvme0n1 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.020 19:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.281 nvme0n1 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.281 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:29.542 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.543 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.804 nvme0n1 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.804 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.065 nvme0n1 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:30.065 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.066 19:41:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.066 19:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.638 nvme0n1 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.638 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:30.639 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:30.639 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:30.639 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:30.639 19:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.639 19:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.225 nvme0n1 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.225 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.226 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.798 nvme0n1 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.798 19:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.369 nvme0n1 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.369 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:32.370 19:41:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.370 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.942 nvme0n1 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:32.942 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJjZTNlMjMyOWQ3Y2JlOTE5YzU3YjY5NDc0ODE5ZGNKsd0q: 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: ]] 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBiNjNhNGI3MTVlODE1MzdlZjc3M2EyNmUxY2ZjNGM0YjA1YWY1MDFkYTJhMGQ1NTZlOGNjMzIyMTE1N2ZkM4NhCDg=: 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.943 19:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.886 nvme0n1 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.886 19:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.457 nvme0n1 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.457 19:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.457 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGNiZGRjZjhlY2Y1ZjY0NzFhZWEwNGM1YzNmMGM4OTGJNdwY: 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: ]] 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyMmFhNzhhNjAwNDJiZWQ0MTg1NWNhYjIxNDhkZmIh85XL: 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.458 19:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.458 19:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.400 nvme0n1 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTgyZWNhZjM5MWYxOTVkYTEwNWRmZWE0OTk0ODI4NzE1ZGZmMzU5MzMyNGIyMDc4A7xEKQ==: 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkzYzczNjQ2NDZmMmFiNjFkYjM0NDQ5NjQ5YTE1MTF0ptbC: 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:35.400 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.400 
19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.032 nvme0n1 00:37:36.032 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.032 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.032 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.032 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.032 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.032 19:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:36.293 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTZmM2MwYjA4ZTJkZDJkYTg4YTQ0YzI4OTEyNzFkMDMyMDA1ZDE0Y2Q5OGY3NzRjZGE1OGJlNTNjZDQ3YjE5YkV7e94=: 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.294 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.865 nvme0n1 00:37:36.866 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.866 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.866 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.866 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.866 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.866 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.127 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.127 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.127 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.127 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.127 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU2NDdjMGRkZjhjNzhiNjA0MzIzNDVlMzczYTJlYmFmMzIwZDFiOWM1ZjM2NGNm/lKaOA==: 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzMGM1OGEwMzU5NmI0NDk5MzYyM2VlNTRiNmIyZjY5MzVmOTUxZjc1M2U5ZGI5vcJcVA==: 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.128 request: 00:37:37.128 { 00:37:37.128 "name": "nvme0", 00:37:37.128 "trtype": "tcp", 00:37:37.128 "traddr": "10.0.0.1", 00:37:37.128 "adrfam": "ipv4", 00:37:37.128 "trsvcid": "4420", 00:37:37.128 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:37.128 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:37.128 "prchk_reftag": false, 00:37:37.128 "prchk_guard": false, 00:37:37.128 "hdgst": false, 00:37:37.128 "ddgst": false, 00:37:37.128 "method": "bdev_nvme_attach_controller", 00:37:37.128 "req_id": 1 00:37:37.128 } 00:37:37.128 Got JSON-RPC error response 00:37:37.128 response: 00:37:37.128 { 00:37:37.128 "code": -5, 00:37:37.128 "message": "Input/output error" 00:37:37.128 } 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:37.128 19:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.128 19:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.128 request: 00:37:37.128 { 00:37:37.128 "name": "nvme0", 00:37:37.128 "trtype": "tcp", 00:37:37.128 "traddr": "10.0.0.1", 00:37:37.128 "adrfam": "ipv4", 00:37:37.128 "trsvcid": "4420", 00:37:37.128 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:37.128 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:37.128 "prchk_reftag": false, 00:37:37.128 "prchk_guard": false, 00:37:37.128 "hdgst": false, 00:37:37.128 "ddgst": false, 00:37:37.128 "dhchap_key": "key2", 00:37:37.128 "method": "bdev_nvme_attach_controller", 00:37:37.128 "req_id": 1 00:37:37.128 } 00:37:37.128 Got JSON-RPC error response 00:37:37.128 response: 00:37:37.128 { 00:37:37.128 "code": -5, 00:37:37.128 "message": "Input/output error" 00:37:37.128 } 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.128 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.390 request: 00:37:37.390 { 00:37:37.390 "name": "nvme0", 00:37:37.390 "trtype": "tcp", 00:37:37.390 "traddr": "10.0.0.1", 00:37:37.390 "adrfam": "ipv4", 00:37:37.390 "trsvcid": "4420", 00:37:37.390 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:37.390 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:37.390 "prchk_reftag": false, 00:37:37.390 "prchk_guard": false, 00:37:37.390 "hdgst": false, 00:37:37.390 "ddgst": false, 00:37:37.390 "dhchap_key": "key1", 00:37:37.390 "dhchap_ctrlr_key": "ckey2", 00:37:37.390 "method": "bdev_nvme_attach_controller", 00:37:37.390 "req_id": 1 00:37:37.390 } 00:37:37.390 Got JSON-RPC error response 00:37:37.390 response: 00:37:37.390 { 00:37:37.390 "code": -5, 00:37:37.390 "message": "Input/output error" 00:37:37.390 } 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:37.390 rmmod nvme_tcp 00:37:37.390 rmmod nvme_fabrics 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3149250 ']' 00:37:37.390 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3149250 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3149250 ']' 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3149250 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3149250 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3149250' 00:37:37.391 killing process with pid 3149250 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3149250 00:37:37.391 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3149250 00:37:38.334 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:38.334 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:38.334 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:38.334 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:38.334 19:41:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:38.334 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.334 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.334 19:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:40.249 19:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:43.551 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:43.551 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:44.123 19:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Obl /tmp/spdk.key-null.p54 /tmp/spdk.key-sha256.bR4 /tmp/spdk.key-sha384.ftQ /tmp/spdk.key-sha512.Qdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:44.123 19:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:46.667 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:46.668 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:46.668 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:46.929 00:37:46.929 real 0m58.607s 00:37:46.929 user 0m52.456s 00:37:46.929 sys 0m14.658s 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.929 ************************************ 00:37:46.929 END TEST nvmf_auth_host 00:37:46.929 ************************************ 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.929 ************************************ 00:37:46.929 START TEST nvmf_digest 00:37:46.929 ************************************ 00:37:46.929 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:47.190 * Looking for test storage... 
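
The nvmf_auth_host suite that finishes above exercises the same two attach patterns for every digest/dhgroup/keyid combination: a positive attach that supplies both the host key and the controller (bidirectional) key, and a negative attach that omits the key and must fail. A condensed sketch of those two calls, taken from the trace above; rpc_cmd and NOT are assumed to be the harness helpers from common/autotest_common.sh (rpc_cmd forwarding to scripts/rpc.py, NOT inverting the exit status), and the key names key0/ckey0 are prepared earlier in host/auth.sh, outside this excerpt:

  # Positive case: DH-HMAC-CHAP attach using host key 0 and controller key 0.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Negative case: the same attach without a DH-HMAC-CHAP key is expected to fail
  # with JSON-RPC error -5 ("Input/output error"); NOT turns that failure into a pass.
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
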
00:37:47.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:47.190 
19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:37:47.190 19:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:53.776 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:53.776 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:53.776 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:53.777 
19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:53.777 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:53.777 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:53.777 19:42:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:53.777 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:54.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:54.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:37:54.038 00:37:54.038 --- 10.0.0.2 ping statistics --- 00:37:54.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.038 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:54.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:54.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:37:54.038 00:37:54.038 --- 10.0.0.1 ping statistics --- 00:37:54.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.038 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.038 ************************************ 00:37:54.038 START TEST nvmf_digest_clean 00:37:54.038 ************************************ 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3166132 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3166132 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3166132 ']' 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:54.038 19:42:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:54.038 [2024-07-22 19:42:12.919471] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:54.038 [2024-07-22 19:42:12.919566] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.038 EAL: No free 2048 kB hugepages reported on node 1 00:37:54.298 [2024-07-22 19:42:13.040398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.298 [2024-07-22 19:42:13.219860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.298 [2024-07-22 19:42:13.219900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.298 [2024-07-22 19:42:13.219918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:54.298 [2024-07-22 19:42:13.219928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:54.298 [2024-07-22 19:42:13.219938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
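For readability, the nvmf_tcp_init sequence traced above reduces to the shell steps below (a condensed sketch; cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are values this particular run derived from the detected E810 ports, not fixed constants):

  ip netns add cvl_0_0_ns_spdk                               # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                         # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns
  modprobe nvme-tcp

The two ping checks are what produce the 0% packet-loss statistics above; only after both directions answer does nvmf_tcp_init return 0 and the digest test proper begin.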
00:37:54.298 [2024-07-22 19:42:13.219966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.868 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:55.129 null0 00:37:55.129 [2024-07-22 19:42:13.949595] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.129 [2024-07-22 19:42:13.973820] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3166481 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3166481 /var/tmp/bperf.sock 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3166481 ']' 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:55.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:55.129 19:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:55.129 [2024-07-22 19:42:14.055150] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:55.129 [2024-07-22 19:42:14.055264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166481 ] 00:37:55.389 EAL: No free 2048 kB hugepages reported on node 1 00:37:55.389 [2024-07-22 19:42:14.183642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.650 [2024-07-22 19:42:14.358697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.910 19:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:55.910 19:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:55.910 19:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:55.910 19:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:55.910 19:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:56.482 19:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:56.482 19:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:56.482 nvme0n1 00:37:56.744 19:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:56.744 19:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:56.744 Running I/O for 2 seconds... 
00:37:58.660 00:37:58.660 Latency(us) 00:37:58.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.660 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:58.660 nvme0n1 : 2.00 18201.84 71.10 0.00 0.00 7023.40 3031.04 17803.95 00:37:58.660 =================================================================================================================== 00:37:58.660 Total : 18201.84 71.10 0.00 0.00 7023.40 3031.04 17803.95 00:37:58.660 0 00:37:58.660 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:58.660 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:58.660 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:58.660 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:58.660 | select(.opcode=="crc32c") 00:37:58.660 | "\(.module_name) \(.executed)"' 00:37:58.660 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3166481 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3166481 ']' 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3166481 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3166481 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3166481' 00:37:58.921 killing process with pid 3166481 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3166481 00:37:58.921 Received shutdown signal, test time was about 2.000000 seconds 00:37:58.921 00:37:58.921 Latency(us) 00:37:58.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.921 =================================================================================================================== 00:37:58.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:58.921 19:42:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3166481 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3167172 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3167172 /var/tmp/bperf.sock 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3167172 ']' 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:59.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:59.494 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:59.495 19:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:59.770 [2024-07-22 19:42:18.464594] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:59.770 [2024-07-22 19:42:18.464702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167172 ] 00:37:59.770 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:59.770 Zero copy mechanism will not be used. 
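Each run_bperf pass follows the same host-side sequence against bdevperf's private RPC socket. Stripped of the long workspace paths echoed above (rpc.py and bdevperf.py stand for scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py in this checkout), a pass amounts to:

  rpc.py -s /var/tmp/bperf.sock framework_start_init         # finish the --wait-for-rpc startup
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  bdevperf.py -s /var/tmp/bperf.sock perform_tests            # the 2-second I/O run
  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  kill "$bperfpid" && wait "$bperfpid"                        # killprocess / wait, as echoed above

The clean test passes when the jq line reports the software module with a non-zero executed count, i.e. the data-digest crc32c really ran, and ran in software.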
00:37:59.770 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.770 [2024-07-22 19:42:18.583609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.103 [2024-07-22 19:42:18.719299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.362 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:00.362 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:00.362 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:00.362 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:00.362 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:00.622 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:00.622 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:01.203 nvme0n1 00:38:01.203 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:01.203 19:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:01.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:01.203 Zero copy mechanism will not be used. 00:38:01.203 Running I/O for 2 seconds... 
00:38:03.117 00:38:03.117 Latency(us) 00:38:03.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.117 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:03.117 nvme0n1 : 2.00 2878.08 359.76 0.00 0.00 5555.69 1447.25 15837.87 00:38:03.117 =================================================================================================================== 00:38:03.117 Total : 2878.08 359.76 0.00 0.00 5555.69 1447.25 15837.87 00:38:03.117 0 00:38:03.117 19:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:03.117 19:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:03.117 19:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:03.117 19:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:03.117 | select(.opcode=="crc32c") 00:38:03.117 | "\(.module_name) \(.executed)"' 00:38:03.117 19:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3167172 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3167172 ']' 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3167172 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3167172 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3167172' 00:38:03.378 killing process with pid 3167172 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3167172 00:38:03.378 Received shutdown signal, test time was about 2.000000 seconds 00:38:03.378 00:38:03.378 Latency(us) 00:38:03.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.378 =================================================================================================================== 00:38:03.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:03.378 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3167172 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3168072 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3168072 /var/tmp/bperf.sock 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3168072 ']' 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:03.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:03.950 19:42:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:03.951 [2024-07-22 19:42:22.786693] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:38:03.951 [2024-07-22 19:42:22.786802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168072 ] 00:38:03.951 EAL: No free 2048 kB hugepages reported on node 1 00:38:03.951 [2024-07-22 19:42:22.884490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.212 [2024-07-22 19:42:23.020171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.782 19:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:04.783 19:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:04.783 19:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:04.783 19:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:04.783 19:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:05.043 19:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:05.043 19:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:05.304 nvme0n1 00:38:05.304 19:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:05.304 19:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:05.304 Running I/O for 2 seconds... 
00:38:07.850 00:38:07.850 Latency(us) 00:38:07.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.850 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.850 nvme0n1 : 2.01 19666.85 76.82 0.00 0.00 6499.70 4123.31 17803.95 00:38:07.850 =================================================================================================================== 00:38:07.850 Total : 19666.85 76.82 0.00 0.00 6499.70 4123.31 17803.95 00:38:07.850 0 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:07.850 | select(.opcode=="crc32c") 00:38:07.850 | "\(.module_name) \(.executed)"' 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3168072 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3168072 ']' 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3168072 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3168072 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3168072' 00:38:07.850 killing process with pid 3168072 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3168072 00:38:07.850 Received shutdown signal, test time was about 2.000000 seconds 00:38:07.850 00:38:07.850 Latency(us) 00:38:07.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.850 =================================================================================================================== 00:38:07.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:07.850 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3168072 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3168872 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3168872 /var/tmp/bperf.sock 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3168872 ']' 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:08.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:08.111 19:42:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:08.111 [2024-07-22 19:42:27.050999] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:08.111 [2024-07-22 19:42:27.051114] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168872 ] 00:38:08.111 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:08.111 Zero copy mechanism will not be used. 
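The four clean-digest passes differ only in the workload arguments handed to bdevperf; the namespace, target and --ddgst attach are identical each time. The launches traced above all have the shape below, with rw/bs/qd as set by run_bperf:

  # pass 1: -w randread  -o 4096    -q 128
  # pass 2: -w randread  -o 131072  -q 16
  # pass 3: -w randwrite -o 4096    -q 128
  # pass 4: -w randwrite -o 131072  -q 16   (launched just above)
  .../spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc

The 131072-byte runs exceed the 65536-byte zero-copy threshold, which is why they are also the ones that log that the zero copy mechanism will not be used.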
00:38:08.372 EAL: No free 2048 kB hugepages reported on node 1 00:38:08.372 [2024-07-22 19:42:27.172509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.372 [2024-07-22 19:42:27.308600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.943 19:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:08.943 19:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:08.943 19:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:08.943 19:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:08.943 19:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:09.204 19:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:09.204 19:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:09.464 nvme0n1 00:38:09.464 19:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:09.464 19:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:09.464 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:09.464 Zero copy mechanism will not be used. 00:38:09.464 Running I/O for 2 seconds... 
00:38:12.010 00:38:12.010 Latency(us) 00:38:12.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.010 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:12.010 nvme0n1 : 2.00 4353.52 544.19 0.00 0.00 3670.10 2034.35 9338.88 00:38:12.010 =================================================================================================================== 00:38:12.010 Total : 4353.52 544.19 0.00 0.00 3670.10 2034.35 9338.88 00:38:12.010 0 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:12.010 | select(.opcode=="crc32c") 00:38:12.010 | "\(.module_name) \(.executed)"' 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:12.010 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3168872 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3168872 ']' 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3168872 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3168872 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3168872' 00:38:12.011 killing process with pid 3168872 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3168872 00:38:12.011 Received shutdown signal, test time was about 2.000000 seconds 00:38:12.011 00:38:12.011 Latency(us) 00:38:12.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.011 =================================================================================================================== 00:38:12.011 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:12.011 19:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3168872 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3166132 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3166132 ']' 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3166132 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3166132 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3166132' 00:38:12.272 killing process with pid 3166132 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3166132 00:38:12.272 19:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3166132 00:38:13.214 00:38:13.214 real 0m19.265s 00:38:13.214 user 0m36.552s 00:38:13.214 sys 0m3.605s 00:38:13.214 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:13.214 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:13.214 ************************************ 00:38:13.214 END TEST nvmf_digest_clean 00:38:13.214 ************************************ 00:38:13.214 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:38:13.214 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:38:13.214 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:13.214 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:13.214 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:13.475 ************************************ 00:38:13.475 START TEST nvmf_digest_error 00:38:13.475 ************************************ 00:38:13.475 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:38:13.475 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:38:13.475 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:13.475 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:13.475 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:13.475 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3169911 00:38:13.476 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3169911 00:38:13.476 19:42:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:13.476 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3169911 ']' 00:38:13.476 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.476 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:13.476 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.476 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:13.476 19:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:13.476 [2024-07-22 19:42:32.261976] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:13.476 [2024-07-22 19:42:32.262078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.476 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.476 [2024-07-22 19:42:32.381602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.737 [2024-07-22 19:42:32.557793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.737 [2024-07-22 19:42:32.557836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.737 [2024-07-22 19:42:32.557849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.737 [2024-07-22 19:42:32.557858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.737 [2024-07-22 19:42:32.557870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
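Both test groups bring the target up the same way, as the startup block above shows: nvmf_tgt is launched inside the target namespace with initialization gated on RPC, and the harness blocks until the default RPC socket answers. Condensed (workspace path shortened; the backgrounding itself is not echoed, only the resulting nvmfpid):

  ip netns exec cvl_0_0_ns_spdk \
      .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"        # waits for /var/tmp/spdk.sock to accept RPCs

Only once waitforlisten returns does the script issue the accel and target-configuration RPCs that follow.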
00:38:13.737 [2024-07-22 19:42:32.557898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:14.309 [2024-07-22 19:42:33.047820] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.309 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:14.568 null0 00:38:14.568 [2024-07-22 19:42:33.299148] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.568 [2024-07-22 19:42:33.323370] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3170100 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3170100 /var/tmp/bperf.sock 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3170100 ']' 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
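Where the clean test only measured, nvmf_digest_error points the target's crc32c handling at the accel error module and then injects failures; that is what the rpc_cmd calls around this point do:

  rpc_cmd accel_assign_opc -o crc32c -m error                    # at target startup (the target was started --wait-for-rpc)
  rpc_cmd accel_error_inject_error -o crc32c -t disable          # no errors yet while the bperf controller attaches
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt digests for the I/O phase (flags exactly as traced)

The "data digest error on tqpair" messages and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are the expected result of that corruption on the randread pass.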
00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:14.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:14.568 19:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:14.568 [2024-07-22 19:42:33.403940] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:14.568 [2024-07-22 19:42:33.404046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170100 ] 00:38:14.568 EAL: No free 2048 kB hugepages reported on node 1 00:38:14.828 [2024-07-22 19:42:33.525751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.828 [2024-07-22 19:42:33.662016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:15.398 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:15.970 nvme0n1 00:38:15.970 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:15.970 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:15.970 19:42:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:15.970 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:15.970 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:15.970 19:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:15.970 Running I/O for 2 seconds... 00:38:15.970 [2024-07-22 19:42:34.803443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.803486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.803500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.817515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.817543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.817554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.831459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.831484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.831494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.844446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.844470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.844479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.857188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.857217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.857226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.871850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.871873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 
19:42:34.871883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.887364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.887386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.887396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.900901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.900924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.900933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:15.970 [2024-07-22 19:42:34.912570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:15.970 [2024-07-22 19:42:34.912592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.970 [2024-07-22 19:42:34.912601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.232 [2024-07-22 19:42:34.926844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.232 [2024-07-22 19:42:34.926867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.232 [2024-07-22 19:42:34.926876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.232 [2024-07-22 19:42:34.940800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.232 [2024-07-22 19:42:34.940823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.232 [2024-07-22 19:42:34.940832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.232 [2024-07-22 19:42:34.955231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:34.955254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:34.955264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:34.968272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:34.968293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 
nsid:1 lba:24241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:34.968302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:34.980910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:34.980933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:34.980942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:34.995322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:34.995344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:34.995353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.009103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.009125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.009138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.023219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.023241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.023250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.036980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.037003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.037012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.049439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.049461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.049470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.064223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 
19:42:35.064245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.064254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.077425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.077448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.077459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.090065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.090087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.090096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.104237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.104259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.104268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.117591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.117613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.117622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.132660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.132685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.132694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.146110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.146132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.146141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.159572] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.159595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.159604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.233 [2024-07-22 19:42:35.172469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.233 [2024-07-22 19:42:35.172492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.233 [2024-07-22 19:42:35.172500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.186346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.186369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.186378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.201093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.201117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.201126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.213888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.213910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.213919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.227586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.227608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.227618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.241678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.241701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.241713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.254812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.254834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.254843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.268739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.268761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.268770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.282768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.282791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.282800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.295383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.295405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.295414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.309869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.309892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.309900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.323379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.323402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.323410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.336926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.336947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.336956] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.350606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.350629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.350637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.363746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.363772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.363780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.376649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.376672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.376681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.391158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.391181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.391190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.404612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.404634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.404643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.420629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.420651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.420667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.495 [2024-07-22 19:42:35.435004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.495 [2024-07-22 19:42:35.435026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9432 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.495 [2024-07-22 19:42:35.435035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.447138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.447161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.447170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.461568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.461591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.461599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.475395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.475417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.475430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.488428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.488450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.488459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.501863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.501886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.501894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.516798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.516820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.516829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.530035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.530057] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.530066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.543984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.544006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.544014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.557460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.557482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.557492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.569951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.569973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.569982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.583443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.583465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.583474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.598743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.598768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.598777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.611339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.611361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.611370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.624986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.625008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.625017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.638365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.638387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.638396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.652449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.652472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.652481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.664754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.664775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.664784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.679381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.679403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.679411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.692974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.692996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.693005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:16.757 [2024-07-22 19:42:35.707282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:16.757 [2024-07-22 19:42:35.707304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.757 [2024-07-22 19:42:35.707318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 
19:42:35.719934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.719957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.719965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.733082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.733104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.733112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.747404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.747426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.747434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.760518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.760540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.760548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.775844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.775866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.775875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.787672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.787694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.787702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.802247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.802271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.802281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.815969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.815991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.816000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.831704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.831730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.831739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.845425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.845447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.845456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.858455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.858476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.858485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.871218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.871239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.871248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.886085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.886107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.886116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.900270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.900292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 
19:42:35.900302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.912034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.912055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.912064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.926812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.926834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.926843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.940033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.940055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.940066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.952843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.952865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.018 [2024-07-22 19:42:35.952873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.018 [2024-07-22 19:42:35.966424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.018 [2024-07-22 19:42:35.966446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.019 [2024-07-22 19:42:35.966454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:35.980267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:35.980288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:35.980297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:35.992842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:35.992864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:77 nsid:1 lba:13242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:35.992873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.006673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.006701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.006710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.021404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.021425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.021434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.034831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.034852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.034861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.048138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.048160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.048168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.062638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.062663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.062671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.074365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.074386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.074395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.088024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 
19:42:36.088046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.088055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.103318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.103340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.103348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.116263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.116284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.116293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.130349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.130371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.130380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.143376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.143397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.143406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.157574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.157595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.157604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.172241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.172263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.172272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.184291] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.184313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.184321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.197166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.197188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.197197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.212089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.212110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.212119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.279 [2024-07-22 19:42:36.226985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.279 [2024-07-22 19:42:36.227008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.279 [2024-07-22 19:42:36.227016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.238694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.238716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.238725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.252249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.252271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.252279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.267237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.267259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.267268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.280712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.280734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.280743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.293509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.293534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.293543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.307398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.307419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.307428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.320278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.320299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.320308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.333928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.333949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.333958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.347238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.347260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.347268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.361765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.361787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.361796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.374629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.374650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.374659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.388862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.540 [2024-07-22 19:42:36.388883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.540 [2024-07-22 19:42:36.388892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.540 [2024-07-22 19:42:36.402063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.541 [2024-07-22 19:42:36.402085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.541 [2024-07-22 19:42:36.402094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.541 [2024-07-22 19:42:36.417194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.541 [2024-07-22 19:42:36.417220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.541 [2024-07-22 19:42:36.417229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.541 [2024-07-22 19:42:36.429816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.541 [2024-07-22 19:42:36.429838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.541 [2024-07-22 19:42:36.429847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.541 [2024-07-22 19:42:36.444212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.541 [2024-07-22 19:42:36.444234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.541 [2024-07-22 19:42:36.444243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.541 [2024-07-22 19:42:36.457052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.541 [2024-07-22 19:42:36.457073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20905 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.541 [2024-07-22 19:42:36.457082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.541 [2024-07-22 19:42:36.471197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.541 [2024-07-22 19:42:36.471224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.541 [2024-07-22 19:42:36.471233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.541 [2024-07-22 19:42:36.484981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.541 [2024-07-22 19:42:36.485003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.541 [2024-07-22 19:42:36.485011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.498458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.498480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.802 [2024-07-22 19:42:36.498488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.510668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.510689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.802 [2024-07-22 19:42:36.510698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.526355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.526380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.802 [2024-07-22 19:42:36.526389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.538464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.538486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.802 [2024-07-22 19:42:36.538495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.552841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.552863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.802 [2024-07-22 19:42:36.552871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.565869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.565892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.802 [2024-07-22 19:42:36.565901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.579496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.579518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.802 [2024-07-22 19:42:36.579526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.802 [2024-07-22 19:42:36.593079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.802 [2024-07-22 19:42:36.593102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.593111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.608317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.608338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.608347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.620378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.620400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.620409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.633622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.633644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.633653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.647436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.647457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.647465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.661863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.661886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.661894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.675318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.675340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.675349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.687459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.687481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.687490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.702919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.702941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.702950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.715388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.715411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.715419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.729213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.729236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.729245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:17.803 [2024-07-22 19:42:36.742683] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:17.803 [2024-07-22 19:42:36.742705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.803 [2024-07-22 19:42:36.742714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:18.064 [2024-07-22 19:42:36.756279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:18.064 [2024-07-22 19:42:36.756305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.064 [2024-07-22 19:42:36.756314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:18.064 [2024-07-22 19:42:36.771935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:18.064 [2024-07-22 19:42:36.771957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.064 [2024-07-22 19:42:36.771967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:18.064 [2024-07-22 19:42:36.785994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:18.064 [2024-07-22 19:42:36.786017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.064 [2024-07-22 19:42:36.786025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:18.064 00:38:18.064 Latency(us) 00:38:18.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.064 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:18.064 nvme0n1 : 2.00 18630.61 72.78 0.00 0.00 6864.14 2894.51 20425.39 00:38:18.064 =================================================================================================================== 00:38:18.064 Total : 18630.61 72.78 0.00 0.00 6864.14 2894.51 20425.39 00:38:18.064 0 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:18.064 | .driver_specific 00:38:18.064 | .nvme_error 00:38:18.064 | .status_code 00:38:18.064 | .command_transient_transport_error' 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3170100 00:38:18.064 19:42:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3170100 ']' 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3170100 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:18.064 19:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3170100 00:38:18.325 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:18.325 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:18.325 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3170100' 00:38:18.325 killing process with pid 3170100 00:38:18.325 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3170100 00:38:18.325 Received shutdown signal, test time was about 2.000000 seconds 00:38:18.325 00:38:18.325 Latency(us) 00:38:18.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.325 =================================================================================================================== 00:38:18.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:18.325 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3170100 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3170949 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3170949 /var/tmp/bperf.sock 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3170949 ']' 00:38:18.585 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:38:18.879 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:18.879 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:18.879 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:18.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
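The trace just above shows the harness closing out the previous randread pass: it pulls per-bdev error statistics over the bdevperf RPC socket (bperf_rpc bdev_get_iostat -b nvme0n1), extracts the command_transient_transport_error counter with jq, asserts it is non-zero ((( 146 > 0 ))), and kills that bdevperf instance. The trace that follows starts the next pass (randread, 131072-byte I/O, queue depth 16) with CRC32C corruption injected so data digest errors are produced on purpose. A minimal bash sketch of that flow, under the assumptions that SPDK_DIR points at the job's SPDK tree, that the nvmf target configured earlier in this log is already exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and that the socket targeted by the harness's rpc_cmd helper for error injection (not visible in this excerpt) is the default one:

#!/usr/bin/env bash
# Sketch only: condenses the steps traced in this log; paths and the error-injection
# RPC socket are assumptions, not taken verbatim from the harness.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=/var/tmp/bperf.sock
rpc_py="$SPDK_DIR/scripts/rpc.py"

# Start bdevperf as the TCP initiator with the flags shown in the trace:
# core mask 0x2, 128 KiB random reads, queue depth 16, 2 s run, -z = wait for RPC.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
while [[ ! -S "$BPERF_SOCK" ]]; do sleep 0.2; done   # simplified stand-in for waitforlisten

# Keep NVMe error statistics and retry transport errors indefinitely so injected
# digest failures are counted rather than aborting the run.
"$rpc_py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous injection, attach the controller with data digest enabled
# (--ddgst), then corrupt every 32nd CRC32C computed by the accel framework.
# The harness issues the inject RPCs through its rpc_cmd helper; pass -s <sock>
# here if the application to corrupt is not listening on the default socket.
"$rpc_py" accel_error_inject_error -o crc32c -t disable
"$rpc_py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the I/O pass, then read back the transient transport error counter produced
# by the digest mismatches and require it to be non-zero, as the harness does.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errcount=$("$rpc_py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))

kill "$bperfpid"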
00:38:18.879 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:18.880 19:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:18.880 [2024-07-22 19:42:37.622010] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:18.880 [2024-07-22 19:42:37.622125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170949 ] 00:38:18.880 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:18.880 Zero copy mechanism will not be used. 00:38:18.880 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.880 [2024-07-22 19:42:37.743235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.140 [2024-07-22 19:42:37.879231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.400 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:19.400 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:19.400 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:19.400 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:19.661 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:19.661 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.661 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:19.661 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.661 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:19.661 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:19.921 nvme0n1 00:38:19.921 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:38:19.921 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.921 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:19.921 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.921 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:19.921 19:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:20.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:20.181 Zero copy mechanism will not be used. 00:38:20.181 Running I/O for 2 seconds... 00:38:20.181 [2024-07-22 19:42:38.940874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.181 [2024-07-22 19:42:38.940916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.940929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:38.951370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:38.951398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.951409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:38.960169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:38.960194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.960212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:38.968479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:38.968504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.968514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:38.976154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:38.976178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.976188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:38.983414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:38.983437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.983450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:38.990432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:38.990456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.990466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:38.997648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:38.997672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:38.997681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.004610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.004634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.004643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.013179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.013208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.013222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.021865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.021887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.021896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.030432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.030456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.030466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.038007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.038032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.038041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 
19:42:39.046546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.046570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.046579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.056592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.056619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.056628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.066239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.066261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.066271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.075896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.075921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.075931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.083766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.083790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.083799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.092077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.092101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.092110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.100947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.100971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.100980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.109077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.109101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.109110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.117787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.117810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.117819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.182 [2024-07-22 19:42:39.127638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.182 [2024-07-22 19:42:39.127661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.182 [2024-07-22 19:42:39.127675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.443 [2024-07-22 19:42:39.136051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.443 [2024-07-22 19:42:39.136075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.443 [2024-07-22 19:42:39.136084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.145132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.145156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.145166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.154137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.154160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.154169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.163638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.163661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 
19:42:39.163670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.172079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.172111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.181744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.181767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.181778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.187309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.187331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.187342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.196097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.196121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.196130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.204326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.204354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.204364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.212247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.212269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.212278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.222645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.222667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.222676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.231753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.231775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.231784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.240477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.240500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.240509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.247918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.247942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.247951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.255540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.255563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.255573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.263944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.263968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.263977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.273497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.273521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.273534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.281908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 
[2024-07-22 19:42:39.281932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.281941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.290161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.290185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.290194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.299825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.299848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.299856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.308535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.308560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.308569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.318618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.318642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.318651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.328323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.328348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.328364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.336295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.336319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.336328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.342866] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.342889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.342901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.351492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.351520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.361008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.361031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.361040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.369273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.369299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.369308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.377000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.377025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.377035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.384383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.384407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.384416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.444 [2024-07-22 19:42:39.394717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.444 [2024-07-22 19:42:39.394739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.444 [2024-07-22 19:42:39.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.705 [2024-07-22 19:42:39.404517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.705 [2024-07-22 19:42:39.404542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.705 [2024-07-22 19:42:39.404551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.705 [2024-07-22 19:42:39.413095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.705 [2024-07-22 19:42:39.413119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.705 [2024-07-22 19:42:39.413128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.705 [2024-07-22 19:42:39.419987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.705 [2024-07-22 19:42:39.420011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.705 [2024-07-22 19:42:39.420020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.705 [2024-07-22 19:42:39.430165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.705 [2024-07-22 19:42:39.430189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.705 [2024-07-22 19:42:39.430198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.705 [2024-07-22 19:42:39.439981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.705 [2024-07-22 19:42:39.440006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.705 [2024-07-22 19:42:39.440015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.705 [2024-07-22 19:42:39.448973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.705 [2024-07-22 19:42:39.448997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.705 [2024-07-22 19:42:39.449005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.705 [2024-07-22 19:42:39.457125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.457149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.457157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.466296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.466320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.466329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.475697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.475722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.475731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.485669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.485691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.485700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.496804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.496828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.496837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.508199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.508232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.508241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.518388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.518413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.518422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.529242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.529266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.529275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.539983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.540007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.540016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.550009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.550033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.550043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.559798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.559824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.559833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.569293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.569318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.569327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.580550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.580575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.580584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.590991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.591016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.591025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.601823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.601847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.601856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.612894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.612918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.612927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.622785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.622808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.622817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.633672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.633698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.633708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.645656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.645680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.645690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.706 [2024-07-22 19:42:39.655754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.706 [2024-07-22 19:42:39.655779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.706 [2024-07-22 19:42:39.655788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.967 [2024-07-22 19:42:39.666797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.967 [2024-07-22 19:42:39.666822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.967 [2024-07-22 19:42:39.666831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.967 [2024-07-22 19:42:39.677483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00) 00:38:20.967 [2024-07-22 19:42:39.677507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.967 [2024-07-22 19:42:39.677516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.967 [2024-07-22 19:42:39.687681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.967 [2024-07-22 19:42:39.687705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.967 [2024-07-22 19:42:39.687718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.967 [2024-07-22 19:42:39.699521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.967 [2024-07-22 19:42:39.699545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.967 [2024-07-22 19:42:39.699553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.967 [2024-07-22 19:42:39.710251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.967 [2024-07-22 19:42:39.710275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.967 [2024-07-22 19:42:39.710284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.720810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.720834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.720843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.731416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.731442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.731452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.742719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.742744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.742754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.753087] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.753118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.753127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.762368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.762393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.762401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.771114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.771139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.771148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.779232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.779257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.779266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.786875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.786899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.786908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.794237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.794260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.794269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.801169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.801194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.801208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.807765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.807789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.807798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.814401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.814425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.814433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.821089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.821114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.821123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.827715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.827739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.827749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.834258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.834282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.834295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.840763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.840788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.840796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.846968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.846993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.847002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.853338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.853362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.853371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.859429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.859454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.859463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.865659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.865684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.865693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.871736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.871760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.871769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.878142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.878166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.878175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.884831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.884854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.884863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.891267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.891291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.891299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.897879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.897902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.897911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.904723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.904747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.968 [2024-07-22 19:42:39.904756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.968 [2024-07-22 19:42:39.911855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.968 [2024-07-22 19:42:39.911880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.969 [2024-07-22 19:42:39.911889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.969 [2024-07-22 19:42:39.918136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:20.969 [2024-07-22 19:42:39.918160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.969 [2024-07-22 19:42:39.918169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.268 [2024-07-22 19:42:39.924786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.268 [2024-07-22 19:42:39.924810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.268 [2024-07-22 19:42:39.924819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.931435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.931459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.931467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.937958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.937983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.937991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.944682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.944706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.944722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.951158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.951182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.951191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.957587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.957612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.957621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.963880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.963904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.963912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.970155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.970188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.976661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.976684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.976693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.983396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.983420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.983429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.990103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.990127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.990137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:39.996903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:39.996927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:39.996936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.004032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.004060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.004069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.010605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.010631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.010640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.017100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.017126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.017135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.023697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.023721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.023731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.030179] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.030210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.030219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.036659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.036683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.036692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.043278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.043302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.043311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.049575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.049599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.056071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.056094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.056107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.062761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.062785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.062794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.070160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.070184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.070193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.078030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.078055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.269 [2024-07-22 19:42:40.078064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.269 [2024-07-22 19:42:40.085111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.269 [2024-07-22 19:42:40.085135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.085144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.092259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.092282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.092291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.098668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.098692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.098701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.105435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.105457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.105466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.111821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.111846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.111855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.118073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.118102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.118111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.124600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.124625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.124635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.131030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.131054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.131063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.137385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.137410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.137419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.143640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.143664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.143673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.149939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.149963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.149973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.156402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.156425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.156434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.162787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.162812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.162821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.169124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.169149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.169161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.175553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.175578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.175586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.270 [2024-07-22 19:42:40.182261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.270 [2024-07-22 19:42:40.182285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.270 [2024-07-22 19:42:40.182295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.188834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.188859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.188869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.195162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.195186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.195196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.201657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.201681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.201690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.207941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.207966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.207975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.214219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.214243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.214252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.220647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.220671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.220681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.226914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.226941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.226950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.233283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.532 [2024-07-22 19:42:40.233308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.532 [2024-07-22 19:42:40.233318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.532 [2024-07-22 19:42:40.239390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.239414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.239423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.245512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.245536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.245545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.251656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.251680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.251689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.258120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.258144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.258153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.264582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.264606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.264616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.270806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.270830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.270839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.277129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.277153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.277163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.283348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.283373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.283382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.289809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.289833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.289842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.296227] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.296251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.296260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.302291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.302315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.302323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.308352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.308376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.308385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.314429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.314453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.314468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.320817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.320842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.320850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.327105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.327129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.327138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.332993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.333021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.333029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.339014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.339038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.339047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.345251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.345274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.345283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.351546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.351569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.351578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.357623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.357647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.357655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.363666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.363690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.363699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.369855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.369879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.369888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.376179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.376212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.376221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.382383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.382407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.382415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.388462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.388486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.388494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.394541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.394564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.394573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.533 [2024-07-22 19:42:40.400769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.533 [2024-07-22 19:42:40.400792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.533 [2024-07-22 19:42:40.400802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.407141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.407164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.407173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.413213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.413236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.413246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.419468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.419490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.419499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.425535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.425560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.425569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.431393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.431417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.431426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.437434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.437458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.437471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.443423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.443447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.443455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.449380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.449404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.449412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.455415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.455439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.455447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.461468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.461493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.461501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.467502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.467526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.467535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.473678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.473702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.473710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.534 [2024-07-22 19:42:40.479733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.534 [2024-07-22 19:42:40.479759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.534 [2024-07-22 19:42:40.479768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.795 [2024-07-22 19:42:40.485786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.795 [2024-07-22 19:42:40.485811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.795 [2024-07-22 19:42:40.485820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.795 [2024-07-22 19:42:40.491930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.795 [2024-07-22 19:42:40.491954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.795 [2024-07-22 19:42:40.491963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.795 [2024-07-22 19:42:40.498141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.795 [2024-07-22 19:42:40.498165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.795 [2024-07-22 19:42:40.498174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.795 [2024-07-22 19:42:40.504293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00) 00:38:21.795 [2024-07-22 19:42:40.504315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.795 [2024-07-22 19:42:40.504323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.795 [2024-07-22 19:42:40.510320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.795 [2024-07-22 19:42:40.510344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.510353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.516423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.516445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.516454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.522545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.522569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.522578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.528679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.528703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.528712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.534732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.534756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.534765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.540780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.540804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.540816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.546846] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.546870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.546879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.554110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.554134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.554143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.562855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.562882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.562892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.571758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.571784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.571793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.580985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.581009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.581019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.591459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.591504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.591513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.601756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.601780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.601789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.612644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.612668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.612678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.624255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.624280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.624289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.635248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.635272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.635281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.646297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.646322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.796 [2024-07-22 19:42:40.646330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.796 [2024-07-22 19:42:40.657761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.796 [2024-07-22 19:42:40.657785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.657794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.669008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.669032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.669041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.679373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.679397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.679406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.690983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.691008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.691018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.702484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.702510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.702519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.712339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.712363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.712375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.721076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.721101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.721110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.729012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.729037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.729046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.736722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.736747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.736757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.797 [2024-07-22 19:42:40.743786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:21.797 [2024-07-22 19:42:40.743811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:21.797 [2024-07-22 19:42:40.743819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:22.058 [2024-07-22 19:42:40.750726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.058 [2024-07-22 19:42:40.750751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.058 [2024-07-22 19:42:40.750759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:22.058 [2024-07-22 19:42:40.758954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.058 [2024-07-22 19:42:40.758979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.058 [2024-07-22 19:42:40.758987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:22.058 [2024-07-22 19:42:40.767928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.058 [2024-07-22 19:42:40.767952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.058 [2024-07-22 19:42:40.767961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:22.058 [2024-07-22 19:42:40.776861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.058 [2024-07-22 19:42:40.776888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.058 [2024-07-22 19:42:40.776897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:22.058 [2024-07-22 19:42:40.786345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.786370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.786379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.795518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.795542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.795551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.804588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.804613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.804623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.814369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.814394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.814403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.824234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.824258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.824267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.835102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.835127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.835135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.846272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.846296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.846305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.856343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.856367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.856376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.865952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:22.059 [2024-07-22 19:42:40.865976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:22.059 [2024-07-22 19:42:40.865989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:22.059 [2024-07-22 19:42:40.874593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00)
00:38:22.059 [2024-07-22 19:42:40.874618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:22.059 [2024-07-22 19:42:40.874627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:22.059 [2024-07-22 19:42:40.883161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00)
00:38:22.059 [2024-07-22 19:42:40.883186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:22.059 [2024-07-22 19:42:40.883196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:22.059 [2024-07-22 19:42:40.892483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00)
00:38:22.059 [2024-07-22 19:42:40.892508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:22.059 [2024-07-22 19:42:40.892517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:22.059 [2024-07-22 19:42:40.903618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00)
00:38:22.059 [2024-07-22 19:42:40.903643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:22.059 [2024-07-22 19:42:40.903652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:22.059 [2024-07-22 19:42:40.913113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00)
00:38:22.059 [2024-07-22 19:42:40.913138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:22.059 [2024-07-22 19:42:40.913148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:22.059 [2024-07-22 19:42:40.922494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00)
00:38:22.059 [2024-07-22 19:42:40.922518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:22.059 [2024-07-22 19:42:40.922527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:22.059
00:38:22.059 Latency(us)
00:38:22.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:22.059 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:38:22.059 nvme0n1 : 2.00 3901.09 487.64 0.00 0.00 4098.42 1631.57 14417.92
00:38:22.059 ===================================================================================================================
00:38:22.059 Total : 3901.09 487.64 0.00 0.00 4098.42 1631.57 14417.92
00:38:22.059 0
00:38:22.059 19:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:22.059 19:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:22.059 19:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:22.059 | .driver_specific
00:38:22.059 | .nvme_error
00:38:22.059 | .status_code
00:38:22.059 | .command_transient_transport_error'
00:38:22.059 19:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 251 > 0 ))
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3170949
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3170949 ']'
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3170949
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3170949
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3170949'
00:38:22.321 killing process with pid 3170949
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3170949
00:38:22.321 Received shutdown signal, test time was about 2.000000 seconds
00:38:22.321
00:38:22.321 Latency(us)
00:38:22.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:22.321 ===================================================================================================================
00:38:22.321 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:22.321 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3170949
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3171641
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3171641 /var/tmp/bperf.sock
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@829 -- # '[' -z 3171641 ']'
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:22.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:38:22.891 19:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:22.891 [2024-07-22 19:42:41.752101] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:38:22.891 [2024-07-22 19:42:41.752222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171641 ]
00:38:23.151 EAL: No free 2048 kB hugepages reported on node 1
00:38:23.151 [2024-07-22 19:42:41.874971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:23.151 [2024-07-22 19:42:42.010814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:23.722 19:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:24.292 nvme0n1
00:38:24.292 19:42:43
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:24.292 19:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:24.292 19:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:24.292 19:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:24.292 19:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:24.292 19:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:24.292 Running I/O for 2 seconds... 00:38:24.293 [2024-07-22 19:42:43.141902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:38:24.293 [2024-07-22 19:42:43.143733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.143767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:24.293 [2024-07-22 19:42:43.153988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:38:24.293 [2024-07-22 19:42:43.155370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.155394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:24.293 [2024-07-22 19:42:43.168491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:38:24.293 [2024-07-22 19:42:43.170281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.170309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:24.293 [2024-07-22 19:42:43.179073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:38:24.293 [2024-07-22 19:42:43.180139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.180161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:24.293 [2024-07-22 19:42:43.195697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:38:24.293 [2024-07-22 19:42:43.197785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.197815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:24.293 [2024-07-22 19:42:43.206692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.293 [2024-07-22 19:42:43.208232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.208253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:24.293 [2024-07-22 19:42:43.220664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.293 [2024-07-22 19:42:43.222217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.222240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.293 [2024-07-22 19:42:43.233794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.293 [2024-07-22 19:42:43.235324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.293 [2024-07-22 19:42:43.235346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.246936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.248490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.248511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.260186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.261742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.261763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.273315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.274861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.274882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.286449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.288007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.288029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.299574] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.301128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.301150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.312723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.314274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.314294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.325847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.327404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.327425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.338972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.340501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.340522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.352113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.353666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.353687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.365227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.366734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.553 [2024-07-22 19:42:43.366755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.553 [2024-07-22 19:42:43.378318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.553 [2024-07-22 19:42:43.379869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.379890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.391431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.392982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.393007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.404552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.406098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.406120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.417689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.419240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.419261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.430802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.432359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.432381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.443896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.445456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.445477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.456988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.458504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.458525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.470085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.471628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.471650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.483184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.484745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.484766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.554 [2024-07-22 19:42:43.496298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.554 [2024-07-22 19:42:43.497838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.554 [2024-07-22 19:42:43.497860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.814 [2024-07-22 19:42:43.509416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.814 [2024-07-22 19:42:43.510972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.814 [2024-07-22 19:42:43.510992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.814 [2024-07-22 19:42:43.522528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.814 [2024-07-22 19:42:43.524071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.814 [2024-07-22 19:42:43.524092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.814 [2024-07-22 19:42:43.535671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.814 [2024-07-22 19:42:43.537221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.814 [2024-07-22 19:42:43.537242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.814 [2024-07-22 19:42:43.548760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.814 [2024-07-22 19:42:43.550303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.814 [2024-07-22 19:42:43.550324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.814 [2024-07-22 19:42:43.561871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.814 [2024-07-22 19:42:43.563419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.814 [2024-07-22 19:42:43.563440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.814 [2024-07-22 19:42:43.574973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.814 [2024-07-22 19:42:43.576494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.814 [2024-07-22 19:42:43.576515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.814 [2024-07-22 19:42:43.588114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.814 [2024-07-22 19:42:43.589668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.589689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.601224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.602769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.602791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.614324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.615869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.615890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.627453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.628997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.629019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.640578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.642135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.642156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.653684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.655230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:24.815 [2024-07-22 19:42:43.655250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.666797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.668330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.668351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.679916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.681480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.681501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.693029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.694582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.694604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.706166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.707720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.707740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.719294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.720841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.720862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.732425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.733971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.733998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.745566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.747118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.747139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:24.815 [2024-07-22 19:42:43.758892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:24.815 [2024-07-22 19:42:43.760460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.815 [2024-07-22 19:42:43.760482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.772038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.773580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.773601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.785162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.786669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.786690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.798312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.799859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.799881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.811517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.813045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.813066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.824634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.826177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.826198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.837775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.839325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.839345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.850922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.852447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.852468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.864040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.865592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.865614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.076 [2024-07-22 19:42:43.877165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.076 [2024-07-22 19:42:43.878720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.076 [2024-07-22 19:42:43.878741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.890293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.891832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.891854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.903394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.904933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.904955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.916557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.918105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.918126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.929676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 
00:38:25.077 [2024-07-22 19:42:43.931192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.931219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.942796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.944320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.944341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.955925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.957480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.957505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.969055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.970614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.970636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.982164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.983711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.983732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:43.995324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:43.996871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:43.996892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:44.008459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:44.010003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:44.010024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.077 [2024-07-22 19:42:44.021592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.077 [2024-07-22 19:42:44.023143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.077 [2024-07-22 19:42:44.023165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.034712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.036256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.036283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.047861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.049414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.049436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.060998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.062549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.062570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.074129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.075692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.075713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.087242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.088786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.088808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.100348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.101899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.101920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.113496] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.115036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.115057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.126621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.128147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.128169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.139743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.141268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.141289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.152855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.154397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.154417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.165951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.167481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.167502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.179077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.180636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.180657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.192216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.193759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.193781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.205330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.206898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.206920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.218483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.220025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.220047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.231609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.233153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.233174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.244739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.246287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.246308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.257876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.259542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.259562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.271120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.272668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.272689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.338 [2024-07-22 19:42:44.284252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.338 [2024-07-22 19:42:44.285804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.338 [2024-07-22 19:42:44.285826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.599 [2024-07-22 19:42:44.297394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.599 [2024-07-22 19:42:44.298950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.599 [2024-07-22 19:42:44.298971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.599 [2024-07-22 19:42:44.310545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.599 [2024-07-22 19:42:44.312090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.599 [2024-07-22 19:42:44.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.599 [2024-07-22 19:42:44.323655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.599 [2024-07-22 19:42:44.325206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.599 [2024-07-22 19:42:44.325228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.336793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.338335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.338356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.349929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.351484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.351505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.363077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.364650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.364672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.376227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.377775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.377796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.389379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.390918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.390940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.402497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.404049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.404070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.415636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.417184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.417210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.428773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.430320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.430341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.441964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.443527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.443547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.455083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.456626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.456648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.468174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.469691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:25.600 [2024-07-22 19:42:44.469712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.481304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.482857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.482878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.494444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.495996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.496017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.507569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.509116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.509137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.520676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.522219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.522244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.533764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.535308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.535329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.600 [2024-07-22 19:42:44.546875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.600 [2024-07-22 19:42:44.548420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.600 [2024-07-22 19:42:44.548441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.861 [2024-07-22 19:42:44.560016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.861 [2024-07-22 19:42:44.561570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 
nsid:1 lba:15215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.861 [2024-07-22 19:42:44.561590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.861 [2024-07-22 19:42:44.573131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.861 [2024-07-22 19:42:44.574684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.861 [2024-07-22 19:42:44.574704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.861 [2024-07-22 19:42:44.586261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.861 [2024-07-22 19:42:44.587819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.861 [2024-07-22 19:42:44.587841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.861 [2024-07-22 19:42:44.599353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.861 [2024-07-22 19:42:44.600857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.861 [2024-07-22 19:42:44.600878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.861 [2024-07-22 19:42:44.612463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.861 [2024-07-22 19:42:44.614010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.861 [2024-07-22 19:42:44.614032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.625582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.627089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.627110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.638717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.640273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.640294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.651819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.653365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.653386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.664938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.666501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.666523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.678025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.679566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.679587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.691153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.692709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.692730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.704301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.705845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.705866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.717422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.718966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.718988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.730524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.732066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.732088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.743652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 
00:38:25.862 [2024-07-22 19:42:44.745196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.745220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.756950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.758498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.758519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.770102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.771650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.771671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.783226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.784776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.784797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.796324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.797866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.797887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:25.862 [2024-07-22 19:42:44.809411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:25.862 [2024-07-22 19:42:44.810964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.862 [2024-07-22 19:42:44.810985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.822608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.824147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.824167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.835703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.837252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.837273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.848844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.850391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.850412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.861954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.863505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.863526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.875087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.876637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.876666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.888209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.889755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.889776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.901330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.902876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.902897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.914424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.915967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.915987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.927532] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.929073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.929094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.940618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.942170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.942191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.953741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.955287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.955309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.966854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.968404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.968424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.979966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.981494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.981515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:44.993097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:44.994648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:44.994669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:45.006216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:45.007756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:45.007777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:45.019319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.123 [2024-07-22 19:42:45.020876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.123 [2024-07-22 19:42:45.020897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.123 [2024-07-22 19:42:45.032407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.124 [2024-07-22 19:42:45.033954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.124 [2024-07-22 19:42:45.033975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.124 [2024-07-22 19:42:45.045497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.124 [2024-07-22 19:42:45.047046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.124 [2024-07-22 19:42:45.047067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.124 [2024-07-22 19:42:45.058614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.124 [2024-07-22 19:42:45.060160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.124 [2024-07-22 19:42:45.060181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.124 [2024-07-22 19:42:45.071716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.124 [2024-07-22 19:42:45.073262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.124 [2024-07-22 19:42:45.073283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.384 [2024-07-22 19:42:45.084864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.384 [2024-07-22 19:42:45.086434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.384 [2024-07-22 19:42:45.086458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:26.384 [2024-07-22 19:42:45.097949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:26.384 [2024-07-22 19:42:45.099502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:26.384 [2024-07-22 19:42:45.099523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:38:26.384 [2024-07-22 19:42:45.111091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90
00:38:26.384 [2024-07-22 19:42:45.112623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:26.384 [2024-07-22 19:42:45.112644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:38:26.384 [2024-07-22 19:42:45.124205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90
00:38:26.384 [2024-07-22 19:42:45.125747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:26.384 [2024-07-22 19:42:45.125768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:38:26.384
00:38:26.384 Latency(us)
00:38:26.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:26.384 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:26.384 nvme0n1 : 2.00 19428.42 75.89 0.00 0.00 6578.96 2471.25 15073.28
00:38:26.384 ===================================================================================================================
00:38:26.384 Total : 19428.42 75.89 0.00 0.00 6578.96 2471.25 15073.28
00:38:26.384 0
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:26.384 | .driver_specific
00:38:26.384 | .nvme_error
00:38:26.384 | .status_code
00:38:26.384 | .command_transient_transport_error'
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 ))
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3171641
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3171641 ']'
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3171641
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:26.384 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3171641
00:38:26.644 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:38:26.644 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:38:26.644 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3171641'
00:38:26.644 killing process with pid 3171641
00:38:26.644 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3171641
00:38:26.644 Received shutdown signal, test time was about 2.000000 seconds
00:38:26.644
00:38:26.644 Latency(us)
00:38:26.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:26.644 ===================================================================================================================
00:38:26.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:26.644 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3171641
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3172458
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3172458 /var/tmp/bperf.sock
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3172458 ']'
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:27.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:38:27.231 19:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:27.231 [2024-07-22 19:42:45.955556] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:38:27.231 [2024-07-22 19:42:45.955671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172458 ]
00:38:27.231 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:27.231 Zero copy mechanism will not be used.
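The trace above is the pass/fail core of each digest-error case: host/digest.sh reads the bdev's NVMe error counters over the bperf RPC socket, extracts the transient-transport-error count with jq (152 for the randwrite run with 4096-byte I/O at depth 128 that just finished), requires it to be non-zero, kills that bdevperf instance, and starts a fresh one for the next workload (randwrite, 131072-byte I/O, queue depth 16). A minimal sketch of that counter check is reconstructed below; the $SPDK_DIR default and the already-listening bdevperf socket are assumptions, while the RPC name and jq filter are taken verbatim from the log.

  #!/usr/bin/env bash
  # Best-effort reconstruction of the check traced above (host/digest.sh@71):
  # read the bdev's NVMe error counters over the bperf RPC socket and require
  # at least one TRANSIENT TRANSPORT ERROR completion. $SPDK_DIR and the
  # already-running bdevperf on /var/tmp/bperf.sock are assumptions.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  get_transient_errcount() {
      # The counter is populated because the controller was created after
      # bdev_nvme_set_options --nvme-error-stat (see the trace further down).
      bperf_rpc bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  if (( errcount > 0 )); then
      echo "observed $errcount transient transport error completions"
  else
      echo "expected digest-error completions, got none" >&2
      exit 1
  fi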
00:38:27.231 EAL: No free 2048 kB hugepages reported on node 1
00:38:27.231 [2024-07-22 19:42:46.075878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:27.492 [2024-07-22 19:42:46.211656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:38:27.753 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:38:27.753 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:38:27.753 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:27.753 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:28.013 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:28.013 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:28.013 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:28.013 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:28.013 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:28.013 19:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:28.273 nvme0n1
00:38:28.273 19:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:28.273 19:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:28.273 19:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:28.273 19:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:28.273 19:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:28.273 19:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:28.273 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:28.273 Zero copy mechanism will not be used.
00:38:28.273 Running I/O for 2 seconds...
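The xtrace block above is the entire setup for this error case, and the stream of (00/22) completions that follows is its intended outcome: NVMe error counting is switched on in the bdev layer, a controller is attached over TCP with data digest enabled (--ddgst), and the accel error-injection RPC is told to corrupt crc32c results, so the data digests on the wire stop matching the payload, the NVMe/TCP layer logs "Data digest error", and the WRITEs complete with TRANSIENT TRANSPORT ERROR (status code type 0x0, status code 0x22, printed as 00/22). A condensed, hedged replay of that RPC sequence is sketched below; the bperf_rpc stand-in mirrors the helper expansion visible in the trace, and the socket path, target address 10.0.0.2:4420 and subsystem nqn.2016-06.io.spdk:cnode1 are simply reused from the log.

  # Condensed replay of the RPC sequence traced above. $SPDK_DIR and the
  # bdevperf instance serving /var/tmp/bperf.sock are assumptions; every
  # command and flag is copied from the trace.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # enable per-status-code NVMe error counters
  bperf_rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean crc32c injection state
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # data digest on every NVMe/TCP data PDU
  bperf_rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results (flags verbatim from the trace)

  # Kick off the queued bdevperf job so the corrupted digests actually hit the wire.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests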
00:38:28.273 [2024-07-22 19:42:47.225995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.226330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.226363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.239401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.239793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.239818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.250647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.251017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.251041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.260902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.261289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.261311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.270440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.270804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.270833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.279318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.279664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.279686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.287320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.287651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.287674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.295766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.296009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.296031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.304496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.304829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.304851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.313042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.313404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.313425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.321752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.321994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.322015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.328091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.328440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.328462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.335111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.335468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 19:42:47.335490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.343495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.534 [2024-07-22 19:42:47.343771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.534 [2024-07-22 
19:42:47.343796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.534 [2024-07-22 19:42:47.351764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.352106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.352128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.360545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.360911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.360933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.370258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.370629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.378534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.378783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.378805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.388146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.388433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.388455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.398472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.398813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.398834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.409164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.409544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.409566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.419162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.419246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.419266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.430127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.430478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.430500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.441620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.441969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.441991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.452354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.452466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.452486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.464271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.464650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.464671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.474384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.474752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.474773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.535 [2024-07-22 19:42:47.485904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.535 [2024-07-22 19:42:47.486161] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.535 [2024-07-22 19:42:47.486183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.497198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.497593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.497615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.506438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.506791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.506812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.515470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.515838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.525606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.525946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.525967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.534617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.534982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.535003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.543070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.543434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.543456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.551678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.552011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.552032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.562412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.562747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.562769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.570538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.570787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.570809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.578214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.578465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.578486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.588164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.588553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.588574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.598264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.598639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.598661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.611244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.611639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.611660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 
19:42:47.624125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.624529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.624551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.635432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.635787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.635808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.647675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.648069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.648091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.660558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.660688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.660708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.673225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.673578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.673599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.685379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.685702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.685723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.796 [2024-07-22 19:42:47.697435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.796 [2024-07-22 19:42:47.697835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.796 [2024-07-22 19:42:47.697860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:28.797 [2024-07-22 19:42:47.709981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.797 [2024-07-22 19:42:47.710436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.797 [2024-07-22 19:42:47.710458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.797 [2024-07-22 19:42:47.722460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.797 [2024-07-22 19:42:47.722913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.797 [2024-07-22 19:42:47.722934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:28.797 [2024-07-22 19:42:47.734938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.797 [2024-07-22 19:42:47.735345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.797 [2024-07-22 19:42:47.735366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:28.797 [2024-07-22 19:42:47.746776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:28.797 [2024-07-22 19:42:47.747141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:28.797 [2024-07-22 19:42:47.747163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.758018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.758439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.758461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.770215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.770484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.770505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.781811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.782238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.782260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.794134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.794528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.794550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.806560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.806847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.806869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.818824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.819228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.819250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.831081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.831584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.831605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.842428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.842813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.842834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.057 [2024-07-22 19:42:47.853960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.057 [2024-07-22 19:42:47.854419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.057 [2024-07-22 19:42:47.854441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.866169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.866541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.866562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.877422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.877881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.877903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.888761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.889212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.889235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.899920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.900247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.900269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.910581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.910953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.910974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.920532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.920798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.920818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.929478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.929741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.929763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.938108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.938340] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.938367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.948271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.948597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.948618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.957640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.957930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.957952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.964328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.964555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.964576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.970354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.970576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.970596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.976732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.977065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.977086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.983095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.983323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.983343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.989460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 
[2024-07-22 19:42:47.989681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.989702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:47.996547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:47.996831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:47.996853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.058 [2024-07-22 19:42:48.005311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.058 [2024-07-22 19:42:48.005637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.058 [2024-07-22 19:42:48.005659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.014715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.015019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.015041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.024069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.024486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.024507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.032737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.033083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.033105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.041128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.041357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.041378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.050496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.050845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.050866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.059371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.059726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.059747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.068030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.068380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.068401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.077758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.078155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.078177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.087430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.087781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.087802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.094671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.094932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.094952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.103031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.103459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.103481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.319 
[2024-07-22 19:42:48.110910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.111270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.111291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.120184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.120538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.120559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.129583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.129934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.129955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.138419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.138780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.138801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.147564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.147830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.147851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.156290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.156516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.156537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.165824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.166111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.166133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.175301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.175675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.175697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.182222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.182449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.182469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.189223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.189577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.189597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.194287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.194509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.194530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.200062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.200416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.200437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.205593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.205811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.205832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.214703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.215024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 
19:42:48.215045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.223167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.223415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.223436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.231531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.231923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.231944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.241338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.241561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.241582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.249971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.250359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.250381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.258834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.259222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.259246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.319 [2024-07-22 19:42:48.268807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.319 [2024-07-22 19:42:48.269195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.319 [2024-07-22 19:42:48.269222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.276498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.276804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.276826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.282666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.282891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.282912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.287871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.288093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.288113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.293590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.293812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.293832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.301346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.301647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.301669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.308646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.308970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.308992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.313928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.314260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.314281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.581 [2024-07-22 19:42:48.318719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.581 [2024-07-22 19:42:48.318939] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.581 [2024-07-22 19:42:48.318960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.323891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.324110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.324130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.328814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.329033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.329054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.334716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.334926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.334946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.339280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.339488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.339509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.343985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.344191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.344217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.351729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.351950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.351971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.356756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.357023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.357043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.362452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.362749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.362777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.367217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.367424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.367445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.371551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.371758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.371778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.377537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.377745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.377766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.382931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.383149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.383169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.390988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.391353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.391375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 
19:42:48.400219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.400567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.400589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.408617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.408943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.408964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.417531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.417746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.417774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.426070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.426392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.426413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.434820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.435246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.435267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.444084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.444301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.444321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.453643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.453855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.453876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.462408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.462802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.462823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.471763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.472142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.472164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.481362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.481697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.481718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.491094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.491319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.491340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.501215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.501433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.501457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.510327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.510716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.510737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.521124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.521463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.521484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.582 [2024-07-22 19:42:48.529403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.582 [2024-07-22 19:42:48.529782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.582 [2024-07-22 19:42:48.529804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.537443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.537710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.537731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.545888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.546134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.546155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.552509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.552856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.559282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.559489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.559510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.564149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.564362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.564383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.572498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.572809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.572830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.577748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.577954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.577975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.583471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.583773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.583795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.590021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.590438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.590460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.599243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.599638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.599659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.608900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.609308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.609330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.615666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.615953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.615974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.621511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.621724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.621744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.627898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.628312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.628337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.634748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.634958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.634979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.641147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.641518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.641539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.649364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.649574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.649598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.654673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.654900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.654921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.661934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.662280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.662302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.670433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.670645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.670666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.677409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.677622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.677643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.684373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.684585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.684606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.690474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.690786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.690808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.696344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.696554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.696575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.701080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.701416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.701437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.705929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.706264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.706285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.711350] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.711559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.711579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.715630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.843 [2024-07-22 19:42:48.715838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.843 [2024-07-22 19:42:48.715859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.843 [2024-07-22 19:42:48.720038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.720255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.720275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.724623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.724843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.724863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.729081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.729380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.735656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.735866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.735886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.742230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.742670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.742691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.748552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.748760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.748781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.753162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.753378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.753398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.758282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.758491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.758512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.762685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.762894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.762915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.768115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.768450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.768472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.773228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.773435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.773455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.777529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.777742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.777763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.781796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.782002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.782022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.788386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:29.844 [2024-07-22 19:42:48.788594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.844 [2024-07-22 19:42:48.788614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.844 [2024-07-22 19:42:48.794762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.104 [2024-07-22 19:42:48.795113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.104 [2024-07-22 19:42:48.795134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.104 [2024-07-22 19:42:48.800613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.104 [2024-07-22 19:42:48.800916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.104 [2024-07-22 19:42:48.800937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.104 [2024-07-22 19:42:48.808564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.104 [2024-07-22 19:42:48.808832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.104 [2024-07-22 19:42:48.808858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.104 [2024-07-22 19:42:48.813605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.104 [2024-07-22 19:42:48.813811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.104 [2024-07-22 19:42:48.813832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.104 [2024-07-22 19:42:48.820936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.104 [2024-07-22 19:42:48.821284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:30.104 [2024-07-22 19:42:48.821305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.105 [2024-07-22 19:42:48.828281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.105 [2024-07-22 19:42:48.828521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.105 [2024-07-22 19:42:48.828542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.105 [2024-07-22 19:42:48.834148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.105 [2024-07-22 19:42:48.834358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.105 [2024-07-22 19:42:48.834379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.105 [2024-07-22 19:42:48.840379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.105 [2024-07-22 19:42:48.840586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.105 [2024-07-22 19:42:48.840607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.105 [2024-07-22 19:42:48.846890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.105 [2024-07-22 19:42:48.847103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.105 [2024-07-22 19:42:48.847124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.105 [2024-07-22 19:42:48.856694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.105 [2024-07-22 19:42:48.857050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.105 [2024-07-22 19:42:48.857078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.105 [2024-07-22 19:42:48.866299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.105 [2024-07-22 19:42:48.866725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.105 [2024-07-22 19:42:48.866746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.105 [2024-07-22 19:42:48.876131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.105 [2024-07-22 19:42:48.876540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.105 [2024-07-22 19:42:48.876562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-message pattern repeats for several dozen further WRITE commands between 19:42:48.885 and 19:42:49.206: tcp.c:2113:data_crc32_calc_done reports a data digest error on tqpair=(0x618000006080) / pdu=0x2000195fef90, nvme_qpair.c prints the affected WRITE (sqid:1 cid:15 len:32, varying LBA), and each command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the LBA, sqhd and timestamps differ between repetitions ...]
00:38:30.368 [2024-07-22 19:42:49.211057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with
pdu=0x2000195fef90 00:38:30.368 [2024-07-22 19:42:49.211269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.368 [2024-07-22 19:42:49.211290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.368 [2024-07-22 19:42:49.215450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:30.368 [2024-07-22 19:42:49.215582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.368 [2024-07-22 19:42:49.215602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.368 00:38:30.368 Latency(us) 00:38:30.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.368 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:30.368 nvme0n1 : 2.00 3840.02 480.00 0.00 0.00 4161.03 2007.04 13325.65 00:38:30.368 =================================================================================================================== 00:38:30.368 Total : 3840.02 480.00 0.00 0.00 4161.03 2007.04 13325.65 00:38:30.368 0 00:38:30.368 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:30.368 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:30.368 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:30.368 | .driver_specific 00:38:30.368 | .nvme_error 00:38:30.368 | .status_code 00:38:30.368 | .command_transient_transport_error' 00:38:30.368 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 248 > 0 )) 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3172458 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3172458 ']' 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3172458 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3172458 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3172458' 00:38:30.630 killing process with pid 3172458 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@967 -- # kill 3172458 00:38:30.630 Received shutdown signal, test time was about 2.000000 seconds 00:38:30.630 00:38:30.630 Latency(us) 00:38:30.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.630 =================================================================================================================== 00:38:30.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:30.630 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3172458 00:38:31.202 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3169911 00:38:31.202 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3169911 ']' 00:38:31.202 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3169911 00:38:31.202 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:38:31.202 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:31.202 19:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3169911 00:38:31.202 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:31.202 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:31.202 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3169911' 00:38:31.202 killing process with pid 3169911 00:38:31.202 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3169911 00:38:31.202 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3169911 00:38:32.145 00:38:32.145 real 0m18.722s 00:38:32.145 user 0m35.838s 00:38:32.145 sys 0m3.483s 00:38:32.145 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:32.145 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:32.145 ************************************ 00:38:32.146 END TEST nvmf_digest_error 00:38:32.146 ************************************ 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:32.146 19:42:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:32.146 rmmod nvme_tcp 00:38:32.146 rmmod nvme_fabrics 00:38:32.146 rmmod nvme_keyring 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3169911 ']' 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3169911 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3169911 ']' 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3169911 00:38:32.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3169911) - No such process 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3169911 is not found' 00:38:32.146 Process with pid 3169911 is not found 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:32.146 19:42:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:34.692 00:38:34.692 real 0m47.265s 00:38:34.692 user 1m14.422s 00:38:34.692 sys 0m12.250s 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:34.692 ************************************ 00:38:34.692 END TEST nvmf_digest 00:38:34.692 ************************************ 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.692 ************************************ 00:38:34.692 START TEST nvmf_bdevperf 00:38:34.692 ************************************ 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:34.692 * Looking for test storage... 
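The digest_error test that finishes above passes or fails on a single number: the count of completions the host recorded as COMMAND TRANSIENT TRANSPORT ERROR (the '(( 248 > 0 ))' check in the trace). The lines below are a reconstruction of what the traced get_transient_errcount/bperf_rpc calls boil down to, not the verbatim host/digest.sh helper; they assume the SPDK repo root as the working directory, a bdevperf RPC socket at /var/tmp/bperf.sock, and a bdev named nvme0n1, all as seen in this run.

# Read the per-bdev NVMe error counters from the running bdevperf instance and
# pull out the transient-transport-error count (the jq filter is the one shown
# in the trace above).
get_transient_errcount() {
    local bdev=$1
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# This run reported 248; any value greater than zero means the injected
# data-digest corruption really was surfaced to the host as a transient
# transport error, which is what the test is checking for.
(( errcount > 0 ))

After that check the script tears everything down, which is what the trailing output above shows: killprocess stops the bdevperf and target apps by pid (3172458 and 3169911 here; the latter is already gone, hence the "No such process" note), and nvmftestfini unloads nvme_tcp/nvme_fabrics/nvme_keyring and runs remove_spdk_ns before the next test starts.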
00:38:34.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:38:34.692 19:42:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:38:41.348 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:41.349 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:41.349 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:41.349 19:42:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:41.349 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:41.349 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.349 19:42:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:41.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:41.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:38:41.349 00:38:41.349 --- 10.0.0.2 ping statistics --- 00:38:41.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.349 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:41.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:38:41.349 00:38:41.349 --- 10.0.0.1 ping statistics --- 00:38:41.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.349 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3177445 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3177445 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3177445 ']' 
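Before the bdevperf target comes up, common.sh splits the two ice ports into an initiator/target pair, and nvmfappstart then launches nvmf_tgt inside the target namespace. The block below is a condensed sketch of the nvmf_tcp_init and nvmfappstart commands traced above; it assumes the ports have already been renamed cvl_0_0/cvl_0_1 as in this log, and the polling loop at the end is only a rough stand-in for the real waitforlisten helper in autotest_common.sh.

# Target-side port goes into its own namespace; the initiator keeps cvl_0_1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in, verify reachability both ways, load the host driver.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

# Start the target in the namespace: tracepoint group mask 0xFFFF (-e) and
# reactors on cores 1-3 (-m 0xE), matching the "Reactor started on core 1/2/3"
# notices below.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Rough equivalent of waitforlisten: poll the default RPC socket until it answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done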
00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:41.349 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.350 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:41.350 19:43:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:41.611 [2024-07-22 19:43:00.381007] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:41.611 [2024-07-22 19:43:00.381106] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.611 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.611 [2024-07-22 19:43:00.523176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:41.873 [2024-07-22 19:43:00.757400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.873 [2024-07-22 19:43:00.757469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.873 [2024-07-22 19:43:00.757485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.873 [2024-07-22 19:43:00.757496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.873 [2024-07-22 19:43:00.757507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:41.873 [2024-07-22 19:43:00.757681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:41.873 [2024-07-22 19:43:00.757808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.873 [2024-07-22 19:43:00.757841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:42.445 [2024-07-22 19:43:01.160532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:42.445 Malloc0 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:42.445 [2024-07-22 19:43:01.266016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:42.445 { 00:38:42.445 "params": { 00:38:42.445 "name": "Nvme$subsystem", 00:38:42.445 "trtype": "$TEST_TRANSPORT", 00:38:42.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:42.445 "adrfam": "ipv4", 00:38:42.445 "trsvcid": "$NVMF_PORT", 00:38:42.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:42.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:42.445 "hdgst": ${hdgst:-false}, 00:38:42.445 "ddgst": ${ddgst:-false} 00:38:42.445 }, 00:38:42.445 "method": "bdev_nvme_attach_controller" 00:38:42.445 } 00:38:42.445 EOF 00:38:42.445 )") 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:42.445 19:43:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:42.445 "params": { 00:38:42.445 "name": "Nvme1", 00:38:42.445 "trtype": "tcp", 00:38:42.445 "traddr": "10.0.0.2", 00:38:42.445 "adrfam": "ipv4", 00:38:42.445 "trsvcid": "4420", 00:38:42.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:42.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:42.445 "hdgst": false, 00:38:42.445 "ddgst": false 00:38:42.445 }, 00:38:42.445 "method": "bdev_nvme_attach_controller" 00:38:42.445 }' 00:38:42.445 [2024-07-22 19:43:01.348065] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:42.445 [2024-07-22 19:43:01.348143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3177691 ] 00:38:42.445 EAL: No free 2048 kB hugepages reported on node 1 00:38:42.706 [2024-07-22 19:43:01.444892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.706 [2024-07-22 19:43:01.621024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.279 Running I/O for 1 seconds... 
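The Prologue above configures the target entirely over RPC and then points a host-side bdevperf at it through a generated JSON config on /dev/fd/62. The commands below restate those rpc_cmd calls in plain form as a sketch of the call shape, not a verbatim excerpt of host/bdevperf.sh; the process substitution stands in for the script's fd-62 redirection, and gen_nvmf_target_json is the test/nvmf/common.sh helper whose output (controller Nvme1 -> 10.0.0.2:4420, digests disabled) is printed in the trace.

# Target side: create the TCP transport with the options recorded in the trace,
# a 64 MiB malloc bdev with 512-byte blocks, and one subsystem carrying that
# namespace, listening on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: bdevperf attaches through the generated JSON and runs the short
# baseline pass first (queue depth 128, 4 KiB I/O, verify workload, 1 second).
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1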
00:38:44.223
00:38:44.223                                                              Latency(us)
00:38:44.223 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:38:44.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:44.223    Verification LBA range: start 0x0 length 0x4000
00:38:44.223    Nvme1n1                    :       1.00    8209.40      32.07       0.00     0.00   15521.05    1884.16   13161.81
00:38:44.223 ===================================================================================================================
00:38:44.223    Total                      :               8209.40      32.07       0.00     0.00   15521.05    1884.16   13161.81
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3178054
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:38:44.806 {
00:38:44.806 "params": {
00:38:44.806 "name": "Nvme$subsystem",
00:38:44.806 "trtype": "$TEST_TRANSPORT",
00:38:44.806 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:44.806 "adrfam": "ipv4",
00:38:44.806 "trsvcid": "$NVMF_PORT",
00:38:44.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:44.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:44.806 "hdgst": ${hdgst:-false},
00:38:44.806 "ddgst": ${ddgst:-false}
00:38:44.806 },
00:38:44.806 "method": "bdev_nvme_attach_controller"
00:38:44.806 }
00:38:44.806 EOF
00:38:44.806 )")
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:38:44.806 19:43:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:38:44.806 "params": {
00:38:44.806 "name": "Nvme1",
00:38:44.806 "trtype": "tcp",
00:38:44.806 "traddr": "10.0.0.2",
00:38:44.806 "adrfam": "ipv4",
00:38:44.806 "trsvcid": "4420",
00:38:44.806 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:44.806 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:44.806 "hdgst": false,
00:38:44.806 "ddgst": false
00:38:44.806 },
00:38:44.806 "method": "bdev_nvme_attach_controller"
00:38:44.806 }'
00:38:44.806 [2024-07-22 19:43:03.757182] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:38:44.806 [2024-07-22 19:43:03.757300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178054 ]
00:38:45.067 EAL: No free 2048 kB hugepages reported on node 1
00:38:45.067 [2024-07-22 19:43:03.868154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:45.327 [2024-07-22 19:43:04.044800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:38:45.588 Running I/O for 15 seconds...
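In the results table for the 1-second run above, the MiB/s column is simply IOPS multiplied by the 4096-byte IO size used for the verify workload. A quick cross-check of that arithmetic (awk used only as a calculator):

  # 8209.40 IO/s * 4096 B per IO / 2^20 B per MiB ~= 32.07 MiB/s, matching the table
  awk 'BEGIN { printf "%.2f MiB/s\n", 8209.40 * 4096 / (1024 * 1024) }'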
00:38:48.140 19:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3177445 00:38:48.140 19:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:48.140 [2024-07-22 19:43:06.702686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.140 [2024-07-22 19:43:06.702753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.140 [2024-07-22 19:43:06.702784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.140 [2024-07-22 19:43:06.702799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.140 [2024-07-22 19:43:06.702818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.140 [2024-07-22 19:43:06.702828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.140 [2024-07-22 19:43:06.702841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.140 [2024-07-22 19:43:06.702853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.702868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.702879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.702895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.702905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.702920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.702933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.702946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.702959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.702972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.702988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 
19:43:06.703012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.141 [2024-07-22 19:43:06.703515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.141 [2024-07-22 19:43:06.703702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.141 [2024-07-22 19:43:06.703714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 
19:43:06.703967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.703989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.703999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.142 [2024-07-22 19:43:06.704490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.142 [2024-07-22 19:43:06.704501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55624 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 
19:43:06.704915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.704984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.704997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.143 [2024-07-22 19:43:06.705230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.143 [2024-07-22 19:43:06.705240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:48.144 [2024-07-22 19:43:06.705651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.144 [2024-07-22 19:43:06.705673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.144 [2024-07-22 19:43:06.705696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.144 [2024-07-22 19:43:06.705718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:48.144 [2024-07-22 19:43:06.705741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.705754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389080 is same with the state(5) to be set 00:38:48.144 [2024-07-22 19:43:06.705773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:48.144 [2024-07-22 19:43:06.705782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:48.144 [2024-07-22 19:43:06.705796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55224 len:8 PRP1 0x0 PRP2 0x0 00:38:48.144 [2024-07-22 19:43:06.705808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:48.144 [2024-07-22 19:43:06.706014] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389080 was disconnected and freed. reset controller. 
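The block above is the expected fallout of the kill -9 3177445 issued by host/bdevperf.sh@33 at its start, which evidently takes down the target side of the connection: every in-flight READ/WRITE on qpair 1 is completed manually with ABORTED - SQ DELETION, the qpair is disconnected and freed, and the bdev_nvme layer starts resetting the controller. A dump like this is easier to digest summarized than read line by line; a small sketch, assuming the console output has been saved to a file (bdevperf.log is a hypothetical name):

  # count aborted completions, then break the aborted commands down by opcode and submission queue
  grep -c 'ABORTED - SQ DELETION' bdevperf.log
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' bdevperf.log | sort | uniq -c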
00:38:48.144 [2024-07-22 19:43:06.709830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.144 [2024-07-22 19:43:06.709918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.144 [2024-07-22 19:43:06.710837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.144 [2024-07-22 19:43:06.710862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.144 [2024-07-22 19:43:06.710876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.144 [2024-07-22 19:43:06.711122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.144 [2024-07-22 19:43:06.711369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.144 [2024-07-22 19:43:06.711388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.144 [2024-07-22 19:43:06.711401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.144 [2024-07-22 19:43:06.715213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.144 [2024-07-22 19:43:06.724434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.144 [2024-07-22 19:43:06.725072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.144 [2024-07-22 19:43:06.725119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.144 [2024-07-22 19:43:06.725135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.144 [2024-07-22 19:43:06.725422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.144 [2024-07-22 19:43:06.725668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.144 [2024-07-22 19:43:06.725681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.144 [2024-07-22 19:43:06.725693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.144 [2024-07-22 19:43:06.729510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.144 [2024-07-22 19:43:06.738722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.144 [2024-07-22 19:43:06.739485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.144 [2024-07-22 19:43:06.739531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.144 [2024-07-22 19:43:06.739546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.144 [2024-07-22 19:43:06.739820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.144 [2024-07-22 19:43:06.740064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.144 [2024-07-22 19:43:06.740078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.740093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.743912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.145 [2024-07-22 19:43:06.752904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.753674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.753720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.753735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.754008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.754263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.754278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.754289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.758319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.145 [2024-07-22 19:43:06.767091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.767858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.767905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.767921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.768194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.768451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.768465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.768476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.772304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.145 [2024-07-22 19:43:06.781320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.782075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.782120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.782135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.782417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.782662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.782676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.782687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.786503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.145 [2024-07-22 19:43:06.795494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.796250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.796297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.796313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.796589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.796833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.796846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.796857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.800688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.145 [2024-07-22 19:43:06.809682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.810300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.810326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.810337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.810579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.810818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.810830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.810840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.814649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.145 [2024-07-22 19:43:06.823856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.824534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.824581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.824596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.824868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.825114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.825128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.825138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.828960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.145 [2024-07-22 19:43:06.838040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.838805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.838851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.838866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.839143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.839400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.839415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.839425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.145 [2024-07-22 19:43:06.843241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.145 [2024-07-22 19:43:06.852243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.145 [2024-07-22 19:43:06.852835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.145 [2024-07-22 19:43:06.852881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.145 [2024-07-22 19:43:06.852897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.145 [2024-07-22 19:43:06.853170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.145 [2024-07-22 19:43:06.853425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.145 [2024-07-22 19:43:06.853440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.145 [2024-07-22 19:43:06.853451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.146 [2024-07-22 19:43:06.857260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.146 [2024-07-22 19:43:06.866467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.146 [2024-07-22 19:43:06.867208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.146 [2024-07-22 19:43:06.867255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.146 [2024-07-22 19:43:06.867270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.146 [2024-07-22 19:43:06.867543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.146 [2024-07-22 19:43:06.867788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.146 [2024-07-22 19:43:06.867801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.146 [2024-07-22 19:43:06.867812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.146 [2024-07-22 19:43:06.871628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.146 [2024-07-22 19:43:06.880638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.146 [2024-07-22 19:43:06.881416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.146 [2024-07-22 19:43:06.881462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.146 [2024-07-22 19:43:06.881478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.146 [2024-07-22 19:43:06.881751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.146 [2024-07-22 19:43:06.881996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.146 [2024-07-22 19:43:06.882010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.146 [2024-07-22 19:43:06.882025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.146 [2024-07-22 19:43:06.885845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.146 [2024-07-22 19:43:06.894832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.146 [2024-07-22 19:43:06.895541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.146 [2024-07-22 19:43:06.895588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.146 [2024-07-22 19:43:06.895609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.146 [2024-07-22 19:43:06.895882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.146 [2024-07-22 19:43:06.896133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.146 [2024-07-22 19:43:06.896146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.146 [2024-07-22 19:43:06.896157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.146 [2024-07-22 19:43:06.899971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.146 [2024-07-22 19:43:06.908954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.146 [2024-07-22 19:43:06.909678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.146 [2024-07-22 19:43:06.909724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.146 [2024-07-22 19:43:06.909739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.146 [2024-07-22 19:43:06.910012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.146 [2024-07-22 19:43:06.910267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.146 [2024-07-22 19:43:06.910281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.146 [2024-07-22 19:43:06.910292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.146 [2024-07-22 19:43:06.914100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.146 [2024-07-22 19:43:06.923089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.146 [2024-07-22 19:43:06.923809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.146 [2024-07-22 19:43:06.923854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.146 [2024-07-22 19:43:06.923870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.146 [2024-07-22 19:43:06.924142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.146 [2024-07-22 19:43:06.924397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.146 [2024-07-22 19:43:06.924412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.146 [2024-07-22 19:43:06.924423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.146 [2024-07-22 19:43:06.928234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.146 [2024-07-22 19:43:06.937229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.146 [2024-07-22 19:43:06.937932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.146 [2024-07-22 19:43:06.937978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.146 [2024-07-22 19:43:06.937993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.146 [2024-07-22 19:43:06.938274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.146 [2024-07-22 19:43:06.938520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.146 [2024-07-22 19:43:06.938533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.146 [2024-07-22 19:43:06.938543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.146 [2024-07-22 19:43:06.942357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.146 [2024-07-22 19:43:06.951348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.146 [2024-07-22 19:43:06.951960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.146 [2024-07-22 19:43:06.951985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.146 [2024-07-22 19:43:06.951997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.146 [2024-07-22 19:43:06.952253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.146 [2024-07-22 19:43:06.952495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:06.952508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:06.952518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:06.956318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.147 [2024-07-22 19:43:06.965532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:06.966150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:06.966173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:06.966184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:06.966430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:06.966670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:06.966683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:06.966693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:06.970498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.147 [2024-07-22 19:43:06.979716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:06.980320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:06.980343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:06.980354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:06.980597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:06.980837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:06.980848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:06.980858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:06.984671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.147 [2024-07-22 19:43:06.993875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:06.994518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:06.994564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:06.994579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:06.994852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:06.995098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:06.995110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:06.995121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:06.998937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.147 [2024-07-22 19:43:07.008152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:07.008848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:07.008873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:07.008885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:07.009126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:07.009372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:07.009386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:07.009396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:07.013195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.147 [2024-07-22 19:43:07.022398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:07.023140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:07.023186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:07.023210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:07.023483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:07.023728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:07.023742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:07.023757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:07.027574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.147 [2024-07-22 19:43:07.036572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:07.037347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:07.037394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:07.037410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:07.037683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:07.037928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:07.037941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:07.037953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:07.041772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.147 [2024-07-22 19:43:07.050761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:07.051537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:07.051584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:07.051599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:07.051871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:07.052116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:07.052129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:07.052139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:07.055958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.147 [2024-07-22 19:43:07.064954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:07.065707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:07.065754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:07.065768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:07.066041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:07.066297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:07.066311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:07.066322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:07.070130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.147 [2024-07-22 19:43:07.079127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.147 [2024-07-22 19:43:07.079783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.147 [2024-07-22 19:43:07.079829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.147 [2024-07-22 19:43:07.079845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.147 [2024-07-22 19:43:07.080117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.147 [2024-07-22 19:43:07.080373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.147 [2024-07-22 19:43:07.080387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.147 [2024-07-22 19:43:07.080398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.147 [2024-07-22 19:43:07.084211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.411 [2024-07-22 19:43:07.093434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.411 [2024-07-22 19:43:07.094088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.411 [2024-07-22 19:43:07.094113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.411 [2024-07-22 19:43:07.094125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.411 [2024-07-22 19:43:07.094372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.411 [2024-07-22 19:43:07.094612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.411 [2024-07-22 19:43:07.094626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.411 [2024-07-22 19:43:07.094643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.411 [2024-07-22 19:43:07.098446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.411 [2024-07-22 19:43:07.107654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.411 [2024-07-22 19:43:07.108288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.411 [2024-07-22 19:43:07.108334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.411 [2024-07-22 19:43:07.108351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.411 [2024-07-22 19:43:07.108625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.411 [2024-07-22 19:43:07.108870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.411 [2024-07-22 19:43:07.108883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.411 [2024-07-22 19:43:07.108894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.411 [2024-07-22 19:43:07.112717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.411 [2024-07-22 19:43:07.121926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.411 [2024-07-22 19:43:07.122626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.411 [2024-07-22 19:43:07.122672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.411 [2024-07-22 19:43:07.122687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.411 [2024-07-22 19:43:07.122964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.411 [2024-07-22 19:43:07.123221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.411 [2024-07-22 19:43:07.123236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.411 [2024-07-22 19:43:07.123247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.411 [2024-07-22 19:43:07.127052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.411 [2024-07-22 19:43:07.136042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.411 [2024-07-22 19:43:07.136782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.411 [2024-07-22 19:43:07.136828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.411 [2024-07-22 19:43:07.136843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.411 [2024-07-22 19:43:07.137115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.411 [2024-07-22 19:43:07.137371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.411 [2024-07-22 19:43:07.137386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.411 [2024-07-22 19:43:07.137397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.411 [2024-07-22 19:43:07.141214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.411 [2024-07-22 19:43:07.150216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.411 [2024-07-22 19:43:07.150975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.411 [2024-07-22 19:43:07.151021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.411 [2024-07-22 19:43:07.151036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.411 [2024-07-22 19:43:07.151319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.411 [2024-07-22 19:43:07.151564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.411 [2024-07-22 19:43:07.151578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.411 [2024-07-22 19:43:07.151589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.411 [2024-07-22 19:43:07.155405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.411 [2024-07-22 19:43:07.164401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.411 [2024-07-22 19:43:07.165023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.411 [2024-07-22 19:43:07.165048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.165060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.165307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.165547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.165560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.165573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.169387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.412 [2024-07-22 19:43:07.178606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.179314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.179360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.179377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.179650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.179895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.179908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.179919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.183740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.412 [2024-07-22 19:43:07.192739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.193498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.193544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.193559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.193833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.194077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.194091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.194102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.197919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.412 [2024-07-22 19:43:07.206900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.207559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.207586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.207597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.207839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.208078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.208090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.208100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.211903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.412 [2024-07-22 19:43:07.221112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.221773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.221795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.221805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.222045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.222290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.222302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.222312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.226112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.412 [2024-07-22 19:43:07.235318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.235967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.235990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.236001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.236246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.236486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.236498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.236508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.240310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.412 [2024-07-22 19:43:07.249513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.250116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.250138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.250149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.250396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.250636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.250648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.250657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.254464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.412 [2024-07-22 19:43:07.263671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.264269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.264292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.264303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.264546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.264785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.264797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.264806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.268612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.412 [2024-07-22 19:43:07.277816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.278504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.278550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.278565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.278837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.279082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.279095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.279106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.282943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.412 [2024-07-22 19:43:07.291949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.292607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.292631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.412 [2024-07-22 19:43:07.292643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.412 [2024-07-22 19:43:07.292884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.412 [2024-07-22 19:43:07.293123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.412 [2024-07-22 19:43:07.293136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.412 [2024-07-22 19:43:07.293145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.412 [2024-07-22 19:43:07.296962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.412 [2024-07-22 19:43:07.306209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.412 [2024-07-22 19:43:07.306847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.412 [2024-07-22 19:43:07.306870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.413 [2024-07-22 19:43:07.306881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.413 [2024-07-22 19:43:07.307121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.413 [2024-07-22 19:43:07.307366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.413 [2024-07-22 19:43:07.307383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.413 [2024-07-22 19:43:07.307393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.413 [2024-07-22 19:43:07.311196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.413 [2024-07-22 19:43:07.320412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.413 [2024-07-22 19:43:07.321020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.413 [2024-07-22 19:43:07.321043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.413 [2024-07-22 19:43:07.321053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.413 [2024-07-22 19:43:07.321299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.413 [2024-07-22 19:43:07.321541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.413 [2024-07-22 19:43:07.321553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.413 [2024-07-22 19:43:07.321562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.413 [2024-07-22 19:43:07.325364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.413 [2024-07-22 19:43:07.334568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.413 [2024-07-22 19:43:07.335310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.413 [2024-07-22 19:43:07.335356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.413 [2024-07-22 19:43:07.335371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.413 [2024-07-22 19:43:07.335644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.413 [2024-07-22 19:43:07.335889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.413 [2024-07-22 19:43:07.335903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.413 [2024-07-22 19:43:07.335914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.413 [2024-07-22 19:43:07.339731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.413 [2024-07-22 19:43:07.348738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.413 [2024-07-22 19:43:07.349477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.413 [2024-07-22 19:43:07.349523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.413 [2024-07-22 19:43:07.349544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.413 [2024-07-22 19:43:07.349816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.413 [2024-07-22 19:43:07.350061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.413 [2024-07-22 19:43:07.350074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.413 [2024-07-22 19:43:07.350085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.413 [2024-07-22 19:43:07.353922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.675 [2024-07-22 19:43:07.362948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.675 [2024-07-22 19:43:07.363734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.675 [2024-07-22 19:43:07.363780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.675 [2024-07-22 19:43:07.363797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.675 [2024-07-22 19:43:07.364070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.675 [2024-07-22 19:43:07.364322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.675 [2024-07-22 19:43:07.364336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.675 [2024-07-22 19:43:07.364347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.675 [2024-07-22 19:43:07.368167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.675 [2024-07-22 19:43:07.377176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.675 [2024-07-22 19:43:07.377814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.675 [2024-07-22 19:43:07.377861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.675 [2024-07-22 19:43:07.377876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.675 [2024-07-22 19:43:07.378149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.675 [2024-07-22 19:43:07.378404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.675 [2024-07-22 19:43:07.378418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.675 [2024-07-22 19:43:07.378429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.675 [2024-07-22 19:43:07.382260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.675 [2024-07-22 19:43:07.391482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.675 [2024-07-22 19:43:07.392152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.675 [2024-07-22 19:43:07.392177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.675 [2024-07-22 19:43:07.392189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.675 [2024-07-22 19:43:07.392438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.675 [2024-07-22 19:43:07.392680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.675 [2024-07-22 19:43:07.392693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.675 [2024-07-22 19:43:07.392702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.675 [2024-07-22 19:43:07.396509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.675 [2024-07-22 19:43:07.405722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.675 [2024-07-22 19:43:07.406414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.406460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.406480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.406753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.406999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.407012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.407025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.410853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.676 [2024-07-22 19:43:07.419858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.420492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.420538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.420553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.420826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.421072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.421086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.421097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.424919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.676 [2024-07-22 19:43:07.434139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.434691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.434716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.434728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.434969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.435215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.435228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.435238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.439043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.676 [2024-07-22 19:43:07.448259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.448902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.448925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.448936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.449176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.449421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.449438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.449448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.453252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.676 [2024-07-22 19:43:07.462465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.463120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.463144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.463155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.463403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.463643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.463655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.463664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.467476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.676 [2024-07-22 19:43:07.476691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.477356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.477402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.477418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.477692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.477937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.477950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.477961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.481786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.676 [2024-07-22 19:43:07.490783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.491551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.491598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.491613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.491885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.492130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.492143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.492154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.495971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.676 [2024-07-22 19:43:07.504982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.505716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.505769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.505785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.506058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.506314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.506328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.506338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.510155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.676 [2024-07-22 19:43:07.519155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.519716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.519742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.519753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.519995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.520242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.520255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.520265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.524067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.676 [2024-07-22 19:43:07.533276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.533894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.676 [2024-07-22 19:43:07.533917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.676 [2024-07-22 19:43:07.533928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.676 [2024-07-22 19:43:07.534168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.676 [2024-07-22 19:43:07.534414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.676 [2024-07-22 19:43:07.534427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.676 [2024-07-22 19:43:07.534437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.676 [2024-07-22 19:43:07.538240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.676 [2024-07-22 19:43:07.547457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.676 [2024-07-22 19:43:07.548081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.677 [2024-07-22 19:43:07.548103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.677 [2024-07-22 19:43:07.548118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.677 [2024-07-22 19:43:07.548364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.677 [2024-07-22 19:43:07.548604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.677 [2024-07-22 19:43:07.548617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.677 [2024-07-22 19:43:07.548626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.677 [2024-07-22 19:43:07.552440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.677 [2024-07-22 19:43:07.561662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.677 [2024-07-22 19:43:07.562193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.677 [2024-07-22 19:43:07.562222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.677 [2024-07-22 19:43:07.562233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.677 [2024-07-22 19:43:07.562473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.677 [2024-07-22 19:43:07.562712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.677 [2024-07-22 19:43:07.562724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.677 [2024-07-22 19:43:07.562734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.677 [2024-07-22 19:43:07.566544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.677 [2024-07-22 19:43:07.575759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.677 [2024-07-22 19:43:07.576471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.677 [2024-07-22 19:43:07.576518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.677 [2024-07-22 19:43:07.576533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.677 [2024-07-22 19:43:07.576805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.677 [2024-07-22 19:43:07.577051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.677 [2024-07-22 19:43:07.577064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.677 [2024-07-22 19:43:07.577075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.677 [2024-07-22 19:43:07.580916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.677 [2024-07-22 19:43:07.589922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.677 [2024-07-22 19:43:07.590558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.677 [2024-07-22 19:43:07.590583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.677 [2024-07-22 19:43:07.590595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.677 [2024-07-22 19:43:07.590836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.677 [2024-07-22 19:43:07.591076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.677 [2024-07-22 19:43:07.591093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.677 [2024-07-22 19:43:07.591103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.677 [2024-07-22 19:43:07.594911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.677 [2024-07-22 19:43:07.604122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.677 [2024-07-22 19:43:07.604778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.677 [2024-07-22 19:43:07.604801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.677 [2024-07-22 19:43:07.604812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.677 [2024-07-22 19:43:07.605052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.677 [2024-07-22 19:43:07.605298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.677 [2024-07-22 19:43:07.605311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.677 [2024-07-22 19:43:07.605321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.677 [2024-07-22 19:43:07.609130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.677 [2024-07-22 19:43:07.618352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.677 [2024-07-22 19:43:07.618857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.677 [2024-07-22 19:43:07.618882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.677 [2024-07-22 19:43:07.618893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.677 [2024-07-22 19:43:07.619134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.677 [2024-07-22 19:43:07.619381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.677 [2024-07-22 19:43:07.619394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.677 [2024-07-22 19:43:07.619404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.677 [2024-07-22 19:43:07.623216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.941 [2024-07-22 19:43:07.632658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.941 [2024-07-22 19:43:07.633260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.941 [2024-07-22 19:43:07.633282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.941 [2024-07-22 19:43:07.633293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.941 [2024-07-22 19:43:07.633533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.941 [2024-07-22 19:43:07.633772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.941 [2024-07-22 19:43:07.633784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.941 [2024-07-22 19:43:07.633794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.941 [2024-07-22 19:43:07.637608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.941 [2024-07-22 19:43:07.646827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.941 [2024-07-22 19:43:07.647539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.941 [2024-07-22 19:43:07.647562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.941 [2024-07-22 19:43:07.647573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.941 [2024-07-22 19:43:07.647812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.941 [2024-07-22 19:43:07.648052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.941 [2024-07-22 19:43:07.648064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.941 [2024-07-22 19:43:07.648074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.941 [2024-07-22 19:43:07.651878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.941 [2024-07-22 19:43:07.661093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.941 [2024-07-22 19:43:07.661711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.941 [2024-07-22 19:43:07.661733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.941 [2024-07-22 19:43:07.661744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.941 [2024-07-22 19:43:07.661984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.941 [2024-07-22 19:43:07.662230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.941 [2024-07-22 19:43:07.662243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.941 [2024-07-22 19:43:07.662252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.941 [2024-07-22 19:43:07.666054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.941 [2024-07-22 19:43:07.675265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.941 [2024-07-22 19:43:07.675751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.941 [2024-07-22 19:43:07.675774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.941 [2024-07-22 19:43:07.675785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.941 [2024-07-22 19:43:07.676025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.941 [2024-07-22 19:43:07.676271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.941 [2024-07-22 19:43:07.676283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.941 [2024-07-22 19:43:07.676293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.941 [2024-07-22 19:43:07.680107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.941 [2024-07-22 19:43:07.689543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.941 [2024-07-22 19:43:07.690193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.941 [2024-07-22 19:43:07.690221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.941 [2024-07-22 19:43:07.690235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.941 [2024-07-22 19:43:07.690475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.941 [2024-07-22 19:43:07.690714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.941 [2024-07-22 19:43:07.690726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.941 [2024-07-22 19:43:07.690736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.941 [2024-07-22 19:43:07.694541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.941 [2024-07-22 19:43:07.703752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.941 [2024-07-22 19:43:07.704299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.941 [2024-07-22 19:43:07.704321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.941 [2024-07-22 19:43:07.704332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.941 [2024-07-22 19:43:07.704579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.704819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.704831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.704841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.708651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.942 [2024-07-22 19:43:07.717866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.718501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.718524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.718535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.718774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.719013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.719025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.719035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.722848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.942 [2024-07-22 19:43:07.732167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.732704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.732727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.732737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.732977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.733223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.733238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.733248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.737049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.942 [2024-07-22 19:43:07.746260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.746896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.746918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.746928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.747167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.747412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.747425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.747434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.751241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.942 [2024-07-22 19:43:07.760458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.761067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.761091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.761102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.761349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.761589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.761601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.761610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.765420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.942 [2024-07-22 19:43:07.774634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.775279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.775302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.775313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.775552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.775792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.775805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.775817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.779641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.942 [2024-07-22 19:43:07.788857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.789424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.789447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.789458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.789697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.789937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.789949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.789958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.793801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.942 [2024-07-22 19:43:07.803017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.803473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.803497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.803508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.803748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.803988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.803999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.804009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.807821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.942 [2024-07-22 19:43:07.817272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.817883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.817905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.817916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.818156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.818402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.818415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.818425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.822234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.942 [2024-07-22 19:43:07.831456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.832056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.832079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.832096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.942 [2024-07-22 19:43:07.832342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.942 [2024-07-22 19:43:07.832583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.942 [2024-07-22 19:43:07.832595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.942 [2024-07-22 19:43:07.832604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.942 [2024-07-22 19:43:07.836418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.942 [2024-07-22 19:43:07.845633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.942 [2024-07-22 19:43:07.846117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.942 [2024-07-22 19:43:07.846141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.942 [2024-07-22 19:43:07.846152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.943 [2024-07-22 19:43:07.846399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.943 [2024-07-22 19:43:07.846639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.943 [2024-07-22 19:43:07.846650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.943 [2024-07-22 19:43:07.846660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.943 [2024-07-22 19:43:07.850471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.943 [2024-07-22 19:43:07.859916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.943 [2024-07-22 19:43:07.860542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.943 [2024-07-22 19:43:07.860564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.943 [2024-07-22 19:43:07.860575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.943 [2024-07-22 19:43:07.860814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.943 [2024-07-22 19:43:07.861054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.943 [2024-07-22 19:43:07.861066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.943 [2024-07-22 19:43:07.861076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.943 [2024-07-22 19:43:07.864887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:48.943 [2024-07-22 19:43:07.874097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.943 [2024-07-22 19:43:07.874708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.943 [2024-07-22 19:43:07.874731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.943 [2024-07-22 19:43:07.874742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.943 [2024-07-22 19:43:07.874981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.943 [2024-07-22 19:43:07.875229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.943 [2024-07-22 19:43:07.875242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.943 [2024-07-22 19:43:07.875252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:48.943 [2024-07-22 19:43:07.879055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:48.943 [2024-07-22 19:43:07.888291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:48.943 [2024-07-22 19:43:07.888765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.943 [2024-07-22 19:43:07.888788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:48.943 [2024-07-22 19:43:07.888799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:48.943 [2024-07-22 19:43:07.889039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:48.943 [2024-07-22 19:43:07.889286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:48.943 [2024-07-22 19:43:07.889299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:48.943 [2024-07-22 19:43:07.889309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.893114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.206 [2024-07-22 19:43:07.902555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:07.903169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:07.903192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:07.903209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:07.903449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:07.903689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:07.903707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:07.903716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.907526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.206 [2024-07-22 19:43:07.916737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:07.917345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:07.917368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:07.917378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:07.917617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:07.917856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:07.917868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:07.917878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.921725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.206 [2024-07-22 19:43:07.930933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:07.931551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:07.931573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:07.931583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:07.931823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:07.932062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:07.932074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:07.932083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.935894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.206 [2024-07-22 19:43:07.945103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:07.945758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:07.945780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:07.945791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:07.946030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:07.946276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:07.946288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:07.946298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.950103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.206 [2024-07-22 19:43:07.959314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:07.959970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:07.959992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:07.960003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:07.960249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:07.960489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:07.960500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:07.960510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.964321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.206 [2024-07-22 19:43:07.973529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:07.974040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:07.974062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:07.974076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:07.974322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:07.974562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:07.974574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:07.974583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.978393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.206 [2024-07-22 19:43:07.987619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:07.988265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:07.988288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:07.988298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:07.988538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:07.988778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:07.988790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:07.988800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:07.992612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.206 [2024-07-22 19:43:08.001827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:08.002404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:08.002427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:08.002438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:08.002677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:08.002917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:08.002929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:08.002939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:08.006743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.206 [2024-07-22 19:43:08.015952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.206 [2024-07-22 19:43:08.016675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.206 [2024-07-22 19:43:08.016697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.206 [2024-07-22 19:43:08.016708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.206 [2024-07-22 19:43:08.016948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.206 [2024-07-22 19:43:08.017191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.206 [2024-07-22 19:43:08.017209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.206 [2024-07-22 19:43:08.017219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.206 [2024-07-22 19:43:08.021021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.206 [2024-07-22 19:43:08.030239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.030885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.030907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.030918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.031156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.031403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.031416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.031425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.035230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.207 [2024-07-22 19:43:08.044435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.044957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.044980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.044990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.045236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.045476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.045488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.045497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.049305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.207 [2024-07-22 19:43:08.058516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.059150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.059172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.059183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.059428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.059668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.059679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.059689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.063499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.207 [2024-07-22 19:43:08.072714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.073361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.073384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.073395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.073635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.073874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.073886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.073895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.077704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.207 [2024-07-22 19:43:08.086923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.087518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.087540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.087551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.087789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.088029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.088041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.088050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.091865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.207 [2024-07-22 19:43:08.101079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.101765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.101811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.101826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.102099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.102354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.102368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.102380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.106198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.207 [2024-07-22 19:43:08.115204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.115789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.115839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.115855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.116128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.116383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.116398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.116409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.120224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.207 [2024-07-22 19:43:08.129445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.130073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.130119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.130134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.130416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.130662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.130675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.130686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.134495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.207 [2024-07-22 19:43:08.143712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.207 [2024-07-22 19:43:08.144489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.207 [2024-07-22 19:43:08.144536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.207 [2024-07-22 19:43:08.144551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.207 [2024-07-22 19:43:08.144823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.207 [2024-07-22 19:43:08.145068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.207 [2024-07-22 19:43:08.145081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.207 [2024-07-22 19:43:08.145091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.207 [2024-07-22 19:43:08.148909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.471 [2024-07-22 19:43:08.157897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.158606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.158652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.158667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.158940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.159189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.159212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.159224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.163029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.471 [2024-07-22 19:43:08.172016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.172769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.172815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.172831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.173103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.173358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.173373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.173383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.177191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.471 [2024-07-22 19:43:08.186210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.186966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.187012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.187027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.187309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.187555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.187568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.187579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.191393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.471 [2024-07-22 19:43:08.200384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.200999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.201023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.201035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.201282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.201522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.201534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.201545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.205350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.471 [2024-07-22 19:43:08.214546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.215215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.215239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.215250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.215491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.215731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.215743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.215752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.219551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.471 [2024-07-22 19:43:08.228754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.229495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.229541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.229556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.229829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.230073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.230086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.230097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.233921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.471 [2024-07-22 19:43:08.242922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.243566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.243590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.243602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.243843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.244083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.244095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.244105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.247908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.471 [2024-07-22 19:43:08.257095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.257772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.257799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.471 [2024-07-22 19:43:08.257811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.471 [2024-07-22 19:43:08.258051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.471 [2024-07-22 19:43:08.258295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.471 [2024-07-22 19:43:08.258308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.471 [2024-07-22 19:43:08.258317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.471 [2024-07-22 19:43:08.262117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.471 [2024-07-22 19:43:08.271312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.471 [2024-07-22 19:43:08.271956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.471 [2024-07-22 19:43:08.271978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.271990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.272235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.272475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.272487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.272497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.276295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.472 [2024-07-22 19:43:08.285506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.286150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.286173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.286184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.286428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.286668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.286680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.286689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.290486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.472 [2024-07-22 19:43:08.299681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.300307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.300353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.300370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.300643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.300892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.300906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.300917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.304737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.472 [2024-07-22 19:43:08.313951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.314675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.314721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.314736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.315009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.315264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.315278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.315289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.319095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.472 [2024-07-22 19:43:08.328087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.328816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.328862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.328877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.329150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.329401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.329415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.329426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.333246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.472 [2024-07-22 19:43:08.342247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.342860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.342906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.342921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.343194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.343452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.343466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.343481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.347297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.472 [2024-07-22 19:43:08.356520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.357177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.357207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.357220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.357461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.357701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.357713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.357723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.361526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.472 [2024-07-22 19:43:08.370726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.371440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.371486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.371503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.371776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.372022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.372035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.372046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.375864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.472 [2024-07-22 19:43:08.384875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.385590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.385636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.385651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.385925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.386170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.386184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.386195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.390009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.472 [2024-07-22 19:43:08.398997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.399722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.472 [2024-07-22 19:43:08.399772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.472 [2024-07-22 19:43:08.399788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.472 [2024-07-22 19:43:08.400061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.472 [2024-07-22 19:43:08.400314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.472 [2024-07-22 19:43:08.400328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.472 [2024-07-22 19:43:08.400339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.472 [2024-07-22 19:43:08.404146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.472 [2024-07-22 19:43:08.413130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.472 [2024-07-22 19:43:08.413892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.473 [2024-07-22 19:43:08.413938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.473 [2024-07-22 19:43:08.413953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.473 [2024-07-22 19:43:08.414236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.473 [2024-07-22 19:43:08.414481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.473 [2024-07-22 19:43:08.414494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.473 [2024-07-22 19:43:08.414506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.473 [2024-07-22 19:43:08.418319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.735 [2024-07-22 19:43:08.427313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.735 [2024-07-22 19:43:08.427977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.735 [2024-07-22 19:43:08.428002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.735 [2024-07-22 19:43:08.428014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.735 [2024-07-22 19:43:08.428260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.736 [2024-07-22 19:43:08.428501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.736 [2024-07-22 19:43:08.428514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.736 [2024-07-22 19:43:08.428523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.736 [2024-07-22 19:43:08.432367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.736 [2024-07-22 19:43:08.441573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.736 [2024-07-22 19:43:08.442213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.736 [2024-07-22 19:43:08.442237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.736 [2024-07-22 19:43:08.442248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.736 [2024-07-22 19:43:08.442495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.736 [2024-07-22 19:43:08.442735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.736 [2024-07-22 19:43:08.442747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.736 [2024-07-22 19:43:08.442757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.736 [2024-07-22 19:43:08.446556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.736 [2024-07-22 19:43:08.455751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.736 [2024-07-22 19:43:08.456488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.736 [2024-07-22 19:43:08.456534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.736 [2024-07-22 19:43:08.456549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.736 [2024-07-22 19:43:08.456822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.736 [2024-07-22 19:43:08.457066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.736 [2024-07-22 19:43:08.457080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.736 [2024-07-22 19:43:08.457090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.736 [2024-07-22 19:43:08.460904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.736 [2024-07-22 19:43:08.469893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.736 [2024-07-22 19:43:08.470650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.736 [2024-07-22 19:43:08.470696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.736 [2024-07-22 19:43:08.470712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.736 [2024-07-22 19:43:08.470985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.736 [2024-07-22 19:43:08.471237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.736 [2024-07-22 19:43:08.471251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.736 [2024-07-22 19:43:08.471262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.736 [2024-07-22 19:43:08.475071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.736 [2024-07-22 19:43:08.484066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.736 [2024-07-22 19:43:08.484677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.736 [2024-07-22 19:43:08.484701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.736 [2024-07-22 19:43:08.484713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.736 [2024-07-22 19:43:08.484953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.736 [2024-07-22 19:43:08.485193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.736 [2024-07-22 19:43:08.485211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.736 [2024-07-22 19:43:08.485226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.736 [2024-07-22 19:43:08.489028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.736 [2024-07-22 19:43:08.498239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.736 [2024-07-22 19:43:08.498981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.736 [2024-07-22 19:43:08.499027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.736 [2024-07-22 19:43:08.499043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.736 [2024-07-22 19:43:08.499324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.736 [2024-07-22 19:43:08.499570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.736 [2024-07-22 19:43:08.499583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.736 [2024-07-22 19:43:08.499594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.736 [2024-07-22 19:43:08.503403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.736 [2024-07-22 19:43:08.512391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.736 [2024-07-22 19:43:08.512920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.736 [2024-07-22 19:43:08.512947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.736 [2024-07-22 19:43:08.512964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.736 [2024-07-22 19:43:08.513214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.736 [2024-07-22 19:43:08.513456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.736 [2024-07-22 19:43:08.513468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.736 [2024-07-22 19:43:08.513478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.737 [2024-07-22 19:43:08.517281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.737 [2024-07-22 19:43:08.526494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.737 [2024-07-22 19:43:08.527142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.737 [2024-07-22 19:43:08.527188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.737 [2024-07-22 19:43:08.527211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.737 [2024-07-22 19:43:08.527484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.737 [2024-07-22 19:43:08.527729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.737 [2024-07-22 19:43:08.527743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.737 [2024-07-22 19:43:08.527753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.737 [2024-07-22 19:43:08.531568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.737 [2024-07-22 19:43:08.540796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.737 [2024-07-22 19:43:08.541539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.737 [2024-07-22 19:43:08.541585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.737 [2024-07-22 19:43:08.541599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.737 [2024-07-22 19:43:08.541872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.737 [2024-07-22 19:43:08.542117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.737 [2024-07-22 19:43:08.542130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.737 [2024-07-22 19:43:08.542141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.737 [2024-07-22 19:43:08.545966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.737 [2024-07-22 19:43:08.554954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.737 [2024-07-22 19:43:08.555721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.737 [2024-07-22 19:43:08.555767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.737 [2024-07-22 19:43:08.555782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.737 [2024-07-22 19:43:08.556055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.737 [2024-07-22 19:43:08.556309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.737 [2024-07-22 19:43:08.556323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.737 [2024-07-22 19:43:08.556334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.737 [2024-07-22 19:43:08.560141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.737 [2024-07-22 19:43:08.569138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.737 [2024-07-22 19:43:08.569892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.737 [2024-07-22 19:43:08.569938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.737 [2024-07-22 19:43:08.569953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.737 [2024-07-22 19:43:08.570235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.737 [2024-07-22 19:43:08.570480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.737 [2024-07-22 19:43:08.570493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.737 [2024-07-22 19:43:08.570504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.737 [2024-07-22 19:43:08.574319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.737 [2024-07-22 19:43:08.583316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.737 [2024-07-22 19:43:08.584040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.737 [2024-07-22 19:43:08.584086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.737 [2024-07-22 19:43:08.584101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.737 [2024-07-22 19:43:08.584386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.737 [2024-07-22 19:43:08.584632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.737 [2024-07-22 19:43:08.584646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.737 [2024-07-22 19:43:08.584656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.737 [2024-07-22 19:43:08.588473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.737 [2024-07-22 19:43:08.597464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.737 [2024-07-22 19:43:08.598240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.737 [2024-07-22 19:43:08.598286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.737 [2024-07-22 19:43:08.598303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.737 [2024-07-22 19:43:08.598578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.737 [2024-07-22 19:43:08.598823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.737 [2024-07-22 19:43:08.598836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.737 [2024-07-22 19:43:08.598848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.737 [2024-07-22 19:43:08.602669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.737 [2024-07-22 19:43:08.611654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.737 [2024-07-22 19:43:08.612337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.737 [2024-07-22 19:43:08.612383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.737 [2024-07-22 19:43:08.612400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.737 [2024-07-22 19:43:08.612676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.737 [2024-07-22 19:43:08.612920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.737 [2024-07-22 19:43:08.612933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.738 [2024-07-22 19:43:08.612944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.738 [2024-07-22 19:43:08.616764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.738 [2024-07-22 19:43:08.625755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.738 [2024-07-22 19:43:08.626510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.738 [2024-07-22 19:43:08.626556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.738 [2024-07-22 19:43:08.626571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.738 [2024-07-22 19:43:08.626843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.738 [2024-07-22 19:43:08.627088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.738 [2024-07-22 19:43:08.627101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.738 [2024-07-22 19:43:08.627117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.738 [2024-07-22 19:43:08.630935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.738 [2024-07-22 19:43:08.639923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.738 [2024-07-22 19:43:08.640647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.738 [2024-07-22 19:43:08.640694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.738 [2024-07-22 19:43:08.640709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.738 [2024-07-22 19:43:08.640982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.738 [2024-07-22 19:43:08.641237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.738 [2024-07-22 19:43:08.641251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.738 [2024-07-22 19:43:08.641262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.738 [2024-07-22 19:43:08.645067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.738 [2024-07-22 19:43:08.654064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.738 [2024-07-22 19:43:08.654779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.738 [2024-07-22 19:43:08.654827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.738 [2024-07-22 19:43:08.654842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.738 [2024-07-22 19:43:08.655115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.738 [2024-07-22 19:43:08.655370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.738 [2024-07-22 19:43:08.655385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.738 [2024-07-22 19:43:08.655396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.738 [2024-07-22 19:43:08.659205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:49.738 [2024-07-22 19:43:08.668190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.738 [2024-07-22 19:43:08.668818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.738 [2024-07-22 19:43:08.668843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.738 [2024-07-22 19:43:08.668854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.738 [2024-07-22 19:43:08.669095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.738 [2024-07-22 19:43:08.669340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.738 [2024-07-22 19:43:08.669354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.738 [2024-07-22 19:43:08.669364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.738 [2024-07-22 19:43:08.673159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:49.738 [2024-07-22 19:43:08.682366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:49.738 [2024-07-22 19:43:08.682996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.738 [2024-07-22 19:43:08.683018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:49.738 [2024-07-22 19:43:08.683029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:49.738 [2024-07-22 19:43:08.683274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:49.738 [2024-07-22 19:43:08.683514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:49.738 [2024-07-22 19:43:08.683527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:49.738 [2024-07-22 19:43:08.683536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:49.738 [2024-07-22 19:43:08.687339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.001 [2024-07-22 19:43:08.696539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.001 [2024-07-22 19:43:08.697148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.697170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.697181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.697426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.697666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.697677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.697687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.701484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.002 [2024-07-22 19:43:08.710682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.711404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.711451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.711468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.711741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.711997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.712011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.712022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.715839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.002 [2024-07-22 19:43:08.724824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.725537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.725583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.725598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.725875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.726120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.726134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.726145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.729959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.002 [2024-07-22 19:43:08.738942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.739665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.739711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.739726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.739998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.740254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.740268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.740280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.744090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.002 [2024-07-22 19:43:08.753087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.753809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.753856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.753871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.754143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.754396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.754410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.754421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.758431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.002 [2024-07-22 19:43:08.767306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.768039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.768085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.768100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.768382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.768628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.768641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.768656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.772466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.002 [2024-07-22 19:43:08.781463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.782229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.782275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.782292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.782565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.782809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.782822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.782833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.786651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.002 [2024-07-22 19:43:08.795639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.796399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.796445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.796460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.796733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.796978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.796991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.797002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.800822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.002 [2024-07-22 19:43:08.809810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.810541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.810587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.810603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.810875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.811120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.002 [2024-07-22 19:43:08.811133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.002 [2024-07-22 19:43:08.811144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.002 [2024-07-22 19:43:08.814962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.002 [2024-07-22 19:43:08.824027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.002 [2024-07-22 19:43:08.824723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.002 [2024-07-22 19:43:08.824748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.002 [2024-07-22 19:43:08.824760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.002 [2024-07-22 19:43:08.825001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.002 [2024-07-22 19:43:08.825246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.825259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.825269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.829066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.003 [2024-07-22 19:43:08.838272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.839005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.839051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.839066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.839349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.839595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.839609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.839620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.843434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.003 [2024-07-22 19:43:08.852423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.853136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.853181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.853196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.853478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.853723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.853737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.853748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.857568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.003 [2024-07-22 19:43:08.866556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.867284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.867330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.867346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.867623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.867868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.867881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.867891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.871713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.003 [2024-07-22 19:43:08.880734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.881361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.881386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.881398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.881639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.881880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.881892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.881901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.885713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.003 [2024-07-22 19:43:08.894916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.895684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.895731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.895747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.896020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.896275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.896289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.896300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.900105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.003 [2024-07-22 19:43:08.909095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.909855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.909901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.909916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.910188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.910442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.910457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.910478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.914294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.003 [2024-07-22 19:43:08.923284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.924036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.924082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.924096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.924378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.924624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.924637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.924648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.928459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.003 [2024-07-22 19:43:08.937450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.938170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.938222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.938239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.938512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.938757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.938771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.938782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.003 [2024-07-22 19:43:08.942597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.003 [2024-07-22 19:43:08.951589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.003 [2024-07-22 19:43:08.952302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.003 [2024-07-22 19:43:08.952349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.003 [2024-07-22 19:43:08.952366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.003 [2024-07-22 19:43:08.952641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.003 [2024-07-22 19:43:08.952895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.003 [2024-07-22 19:43:08.952908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.003 [2024-07-22 19:43:08.952919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.266 [2024-07-22 19:43:08.956736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.266 [2024-07-22 19:43:08.965728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.266 [2024-07-22 19:43:08.966518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.266 [2024-07-22 19:43:08.966564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.266 [2024-07-22 19:43:08.966579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.266 [2024-07-22 19:43:08.966852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.266 [2024-07-22 19:43:08.967099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.266 [2024-07-22 19:43:08.967113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.266 [2024-07-22 19:43:08.967123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.266 [2024-07-22 19:43:08.970942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.266 [2024-07-22 19:43:08.979929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.266 [2024-07-22 19:43:08.980690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.266 [2024-07-22 19:43:08.980736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.266 [2024-07-22 19:43:08.980751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.266 [2024-07-22 19:43:08.981024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.266 [2024-07-22 19:43:08.981279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.266 [2024-07-22 19:43:08.981293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.266 [2024-07-22 19:43:08.981304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.266 [2024-07-22 19:43:08.985124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.266 [2024-07-22 19:43:08.994122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.266 [2024-07-22 19:43:08.994840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.266 [2024-07-22 19:43:08.994886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.266 [2024-07-22 19:43:08.994901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.266 [2024-07-22 19:43:08.995174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:08.995429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:08.995443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:08.995454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:08.999266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.267 [2024-07-22 19:43:09.008265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.009026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.009072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.009087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.009371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.009616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.009630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.009640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.013456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.267 [2024-07-22 19:43:09.022443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.023098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.023123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.023135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.023380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.023620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.023632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.023642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.027435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.267 [2024-07-22 19:43:09.036632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.037251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.037297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.037312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.037585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.037829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.037843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.037854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.041671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.267 [2024-07-22 19:43:09.050883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.051604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.051650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.051665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.051938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.052183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.052208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.052220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.056029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.267 [2024-07-22 19:43:09.065021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.065652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.065677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.065688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.065929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.066168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.066180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.066190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.069992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.267 [2024-07-22 19:43:09.079191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.079830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.079853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.079864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.080104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.080349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.080361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.080371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.084178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.267 [2024-07-22 19:43:09.093367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.093975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.093997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.094008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.094252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.094491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.094503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.094513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.098311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.267 [2024-07-22 19:43:09.107510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.108117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.108138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.108149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.108392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.108632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.108644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.108653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.112452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.267 [2024-07-22 19:43:09.121652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.122303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.122326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.122337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.122577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.267 [2024-07-22 19:43:09.122817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.267 [2024-07-22 19:43:09.122829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.267 [2024-07-22 19:43:09.122838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.267 [2024-07-22 19:43:09.126641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.267 [2024-07-22 19:43:09.135840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.267 [2024-07-22 19:43:09.136562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.267 [2024-07-22 19:43:09.136608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.267 [2024-07-22 19:43:09.136623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.267 [2024-07-22 19:43:09.136896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.268 [2024-07-22 19:43:09.137142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.268 [2024-07-22 19:43:09.137155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.268 [2024-07-22 19:43:09.137165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.268 [2024-07-22 19:43:09.140981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.268 [2024-07-22 19:43:09.149976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.268 [2024-07-22 19:43:09.150741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.268 [2024-07-22 19:43:09.150787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.268 [2024-07-22 19:43:09.150803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.268 [2024-07-22 19:43:09.151079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.268 [2024-07-22 19:43:09.151332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.268 [2024-07-22 19:43:09.151346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.268 [2024-07-22 19:43:09.151357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.268 [2024-07-22 19:43:09.155164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.268 [2024-07-22 19:43:09.164157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.268 [2024-07-22 19:43:09.164798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.268 [2024-07-22 19:43:09.164823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.268 [2024-07-22 19:43:09.164834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.268 [2024-07-22 19:43:09.165075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.268 [2024-07-22 19:43:09.165321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.268 [2024-07-22 19:43:09.165333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.268 [2024-07-22 19:43:09.165343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.268 [2024-07-22 19:43:09.169194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.268 [2024-07-22 19:43:09.178413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.268 [2024-07-22 19:43:09.179033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.268 [2024-07-22 19:43:09.179056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.268 [2024-07-22 19:43:09.179067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.268 [2024-07-22 19:43:09.179312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.268 [2024-07-22 19:43:09.179552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.268 [2024-07-22 19:43:09.179564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.268 [2024-07-22 19:43:09.179574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.268 [2024-07-22 19:43:09.183382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.268 [2024-07-22 19:43:09.192583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.268 [2024-07-22 19:43:09.193176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.268 [2024-07-22 19:43:09.193198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.268 [2024-07-22 19:43:09.193213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.268 [2024-07-22 19:43:09.193454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.268 [2024-07-22 19:43:09.193693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.268 [2024-07-22 19:43:09.193708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.268 [2024-07-22 19:43:09.193718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.268 [2024-07-22 19:43:09.197522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.268 [2024-07-22 19:43:09.206717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.268 [2024-07-22 19:43:09.207443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.268 [2024-07-22 19:43:09.207496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.268 [2024-07-22 19:43:09.207511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.268 [2024-07-22 19:43:09.207785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.268 [2024-07-22 19:43:09.208030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.268 [2024-07-22 19:43:09.208043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.268 [2024-07-22 19:43:09.208054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.268 [2024-07-22 19:43:09.211878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.530 [2024-07-22 19:43:09.220881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.530 [2024-07-22 19:43:09.221452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.530 [2024-07-22 19:43:09.221477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.530 [2024-07-22 19:43:09.221489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.530 [2024-07-22 19:43:09.221730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.530 [2024-07-22 19:43:09.221971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.530 [2024-07-22 19:43:09.221983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.530 [2024-07-22 19:43:09.221993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.530 [2024-07-22 19:43:09.225801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.530 [2024-07-22 19:43:09.235007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.530 [2024-07-22 19:43:09.235661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.530 [2024-07-22 19:43:09.235684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.530 [2024-07-22 19:43:09.235695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.530 [2024-07-22 19:43:09.235935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.530 [2024-07-22 19:43:09.236174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.530 [2024-07-22 19:43:09.236187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.530 [2024-07-22 19:43:09.236196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.530 [2024-07-22 19:43:09.240001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.530 [2024-07-22 19:43:09.249217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.530 [2024-07-22 19:43:09.249864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.530 [2024-07-22 19:43:09.249885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.530 [2024-07-22 19:43:09.249896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.530 [2024-07-22 19:43:09.250136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.530 [2024-07-22 19:43:09.250382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.530 [2024-07-22 19:43:09.250395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.530 [2024-07-22 19:43:09.250404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.530 [2024-07-22 19:43:09.254207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.530 [2024-07-22 19:43:09.263411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.530 [2024-07-22 19:43:09.264056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.530 [2024-07-22 19:43:09.264078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.530 [2024-07-22 19:43:09.264089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.530 [2024-07-22 19:43:09.264360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.530 [2024-07-22 19:43:09.264600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.530 [2024-07-22 19:43:09.264612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.530 [2024-07-22 19:43:09.264621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.530 [2024-07-22 19:43:09.268428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.530 [2024-07-22 19:43:09.277634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.530 [2024-07-22 19:43:09.278248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.530 [2024-07-22 19:43:09.278271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.530 [2024-07-22 19:43:09.278281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.530 [2024-07-22 19:43:09.278522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.278761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.278773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.278782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.282588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.531 [2024-07-22 19:43:09.291804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.292359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.292382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.292396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.292637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.292876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.292888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.292897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.296701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.531 [2024-07-22 19:43:09.305904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.306520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.306543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.306553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.306792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.307033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.307044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.307054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.310868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.531 [2024-07-22 19:43:09.320071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.320688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.320717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.320728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.320967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.321212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.321224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.321234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.325035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.531 [2024-07-22 19:43:09.334254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.334887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.334908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.334919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.335159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.335404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.335420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.335430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.339230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.531 [2024-07-22 19:43:09.348425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.349037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.349059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.349070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.349314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.349554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.349566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.349575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.353375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.531 [2024-07-22 19:43:09.362568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.363176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.363198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.363214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.363455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.363694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.363706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.363716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.367518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.531 [2024-07-22 19:43:09.376721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.377422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.377468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.377484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.377757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.378002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.378016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.378027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.381848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.531 [2024-07-22 19:43:09.390866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.391455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.391480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.391492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.391734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.391975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.391987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.391997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.395808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.531 [2024-07-22 19:43:09.405013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.405619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.405643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.405654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.531 [2024-07-22 19:43:09.405894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.531 [2024-07-22 19:43:09.406134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.531 [2024-07-22 19:43:09.406146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.531 [2024-07-22 19:43:09.406155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.531 [2024-07-22 19:43:09.409967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.531 [2024-07-22 19:43:09.419172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.531 [2024-07-22 19:43:09.419794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.531 [2024-07-22 19:43:09.419817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.531 [2024-07-22 19:43:09.419828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.532 [2024-07-22 19:43:09.420068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.532 [2024-07-22 19:43:09.420313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.532 [2024-07-22 19:43:09.420326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.532 [2024-07-22 19:43:09.420336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.532 [2024-07-22 19:43:09.424138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.532 [2024-07-22 19:43:09.433338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.532 [2024-07-22 19:43:09.433994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.532 [2024-07-22 19:43:09.434018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.532 [2024-07-22 19:43:09.434032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.532 [2024-07-22 19:43:09.434276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.532 [2024-07-22 19:43:09.434517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.532 [2024-07-22 19:43:09.434528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.532 [2024-07-22 19:43:09.434538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.532 [2024-07-22 19:43:09.438340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.532 [2024-07-22 19:43:09.447529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.532 [2024-07-22 19:43:09.448138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.532 [2024-07-22 19:43:09.448160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.532 [2024-07-22 19:43:09.448170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.532 [2024-07-22 19:43:09.448415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.532 [2024-07-22 19:43:09.448654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.532 [2024-07-22 19:43:09.448667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.532 [2024-07-22 19:43:09.448676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.532 [2024-07-22 19:43:09.452481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.532 [2024-07-22 19:43:09.461682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.532 [2024-07-22 19:43:09.462316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.532 [2024-07-22 19:43:09.462339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.532 [2024-07-22 19:43:09.462350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.532 [2024-07-22 19:43:09.462590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.532 [2024-07-22 19:43:09.462829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.532 [2024-07-22 19:43:09.462841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.532 [2024-07-22 19:43:09.462851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.532 [2024-07-22 19:43:09.466650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.532 [2024-07-22 19:43:09.475846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.532 [2024-07-22 19:43:09.476289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.532 [2024-07-22 19:43:09.476312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.532 [2024-07-22 19:43:09.476323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.532 [2024-07-22 19:43:09.476562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.532 [2024-07-22 19:43:09.476802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.532 [2024-07-22 19:43:09.476818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.532 [2024-07-22 19:43:09.476829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.532 [2024-07-22 19:43:09.480637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.795 [2024-07-22 19:43:09.490073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.490711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.490733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.490744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.490984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.491228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.491241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.491251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.495050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.795 [2024-07-22 19:43:09.504262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.505001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.505048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.505063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.505347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.505592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.505606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.505617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.509437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.795 [2024-07-22 19:43:09.518439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.518982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.519007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.519018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.519275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.519516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.519528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.519538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.523346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.795 [2024-07-22 19:43:09.532557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.533170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.533193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.533211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.533454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.533693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.533705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.533715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.537515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.795 [2024-07-22 19:43:09.546720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.547320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.547366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.547383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.547657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.547902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.547916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.547927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.551745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.795 [2024-07-22 19:43:09.560951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.561605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.561630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.561641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.561882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.562122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.562135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.562144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.565943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.795 [2024-07-22 19:43:09.575133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.575777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.575800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.575815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.576054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.576299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.576311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.576321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.580117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.795 [2024-07-22 19:43:09.589336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.589910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.589932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.589944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.590183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.590429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.795 [2024-07-22 19:43:09.590441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.795 [2024-07-22 19:43:09.590451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.795 [2024-07-22 19:43:09.594250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.795 [2024-07-22 19:43:09.603447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.795 [2024-07-22 19:43:09.604083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.795 [2024-07-22 19:43:09.604106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.795 [2024-07-22 19:43:09.604117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.795 [2024-07-22 19:43:09.604361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.795 [2024-07-22 19:43:09.604600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.604612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.604621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 [2024-07-22 19:43:09.608423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.796 [2024-07-22 19:43:09.617626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.618277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.618300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.618311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.618551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.618796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.618808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.618818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 [2024-07-22 19:43:09.622623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.796 [2024-07-22 19:43:09.631830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.632491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.632514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.632525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.632765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.633004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.633016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.633025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 [2024-07-22 19:43:09.636826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.796 [2024-07-22 19:43:09.646031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.646672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.646694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.646705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.646944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.647184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.647196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.647211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 [2024-07-22 19:43:09.651014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.796 [2024-07-22 19:43:09.660223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.660872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.660894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.660906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.661146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.661391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.661404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.661414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 [2024-07-22 19:43:09.665224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.796 [2024-07-22 19:43:09.674422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.675058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.675080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.675091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.675336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.675576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.675588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.675598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 [2024-07-22 19:43:09.679404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.796 [2024-07-22 19:43:09.688620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.689262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.689285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.689296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.689536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.689775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.689787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.689797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3177445 Killed "${NVMF_APP[@]}" "$@" 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:50.796 [2024-07-22 19:43:09.693605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3179303 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3179303 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3179303 ']' 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
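At this point test/nvmf/host/bdevperf.sh has killed the original target process and is bringing up a fresh one through tgt_init / nvmfappstart -m 0xE, which is why the host side keeps seeing connection refused until the new listener is ready. A rough shell sketch of that restart step, reconstructed only from the command echoed in the log above (the polling loop is an approximation of waitforlisten, not the helper's actual implementation):

# Start a new nvmf_tgt inside the test's network namespace, mirroring the logged command
# (run with sufficient privileges, as the harness does).
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Crude stand-in for waitforlisten: poll until the RPC socket shows up
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"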
00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:50.796 19:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:50.796 [2024-07-22 19:43:09.702820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.703335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.703358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.703369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.703609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.703849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.703861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.703870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.796 [2024-07-22 19:43:09.707679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.796 [2024-07-22 19:43:09.717121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.796 [2024-07-22 19:43:09.717776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.796 [2024-07-22 19:43:09.717800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.796 [2024-07-22 19:43:09.717811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.796 [2024-07-22 19:43:09.718052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.796 [2024-07-22 19:43:09.718299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.796 [2024-07-22 19:43:09.718320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.796 [2024-07-22 19:43:09.718330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.797 [2024-07-22 19:43:09.722138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:50.797 [2024-07-22 19:43:09.731358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.797 [2024-07-22 19:43:09.732023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.797 [2024-07-22 19:43:09.732047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.797 [2024-07-22 19:43:09.732058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:50.797 [2024-07-22 19:43:09.732305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:50.797 [2024-07-22 19:43:09.732545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:50.797 [2024-07-22 19:43:09.732557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:50.797 [2024-07-22 19:43:09.732567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:50.797 [2024-07-22 19:43:09.736380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:50.797 [2024-07-22 19:43:09.745595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:50.797 [2024-07-22 19:43:09.746240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:50.797 [2024-07-22 19:43:09.746267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:50.797 [2024-07-22 19:43:09.746278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.746519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.746761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.746774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.746783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.750596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.060 [2024-07-22 19:43:09.759825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.760457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.760481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.760492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.760734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.760976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.760988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.760998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.764815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.060 [2024-07-22 19:43:09.774038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.774711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.774734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.774745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.774986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.775234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.775247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.775257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.779065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.060 [2024-07-22 19:43:09.781320] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:38:51.060 [2024-07-22 19:43:09.781416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.060 [2024-07-22 19:43:09.788317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.788972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.789000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.789012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.789259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.789503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.789514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.789524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.793462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.060 [2024-07-22 19:43:09.802470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.803097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.803119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.803130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.803377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.803619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.803631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.803641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.807459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
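The new target is launched with core mask 0xE, passed as -m to nvmf_tgt and forwarded as -c in the DPDK EAL parameters above. 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the "Total cores available: 3" notice printed a little further down once the app starts. A tiny bash sketch that expands a mask into core IDs:

mask=0xE                           # core mask passed via -m / -c
for i in $(seq 0 31); do
  (( (mask >> i) & 1 )) && echo "core $i enabled"
done
# prints core 1, core 2, core 3 -> 3 cores total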
00:38:51.060 [2024-07-22 19:43:09.816686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.817176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.817198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.817216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.817457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.817698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.817711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.817721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.821536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.060 [2024-07-22 19:43:09.830774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.831421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.831445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.831456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.831699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.831946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.831958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.831967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.835784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.060 [2024-07-22 19:43:09.844893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.845555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.845578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.845590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.845831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.846072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.846084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.846094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 EAL: No free 2048 kB hugepages reported on node 1 00:38:51.060 [2024-07-22 19:43:09.849915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.060 [2024-07-22 19:43:09.859146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.859764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.859787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.859800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.860041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.860287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.860300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.860310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.864122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
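The EAL warning in this block ("No free 2048 kB hugepages reported on node 1") only means that NUMA node 1 has no 2 MB hugepages reserved; startup continues as long as another node supplies them, and the app does go on to report its core count below. Two standard Linux paths (nothing SPDK-specific) for checking the per-node reservation on such a machine:

# 2 MB hugepages reserved on each NUMA node
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
# overall hugepage accounting
grep -i huge /proc/meminfo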
00:38:51.060 [2024-07-22 19:43:09.873349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.873961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.873983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.873994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.874242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.874484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.874496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.874510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.878323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.060 [2024-07-22 19:43:09.887557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.888211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.888235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.888247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.888489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.060 [2024-07-22 19:43:09.888730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.060 [2024-07-22 19:43:09.888742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.060 [2024-07-22 19:43:09.888751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.060 [2024-07-22 19:43:09.892564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.060 [2024-07-22 19:43:09.901789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.060 [2024-07-22 19:43:09.902515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.060 [2024-07-22 19:43:09.902565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.060 [2024-07-22 19:43:09.902581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.060 [2024-07-22 19:43:09.902868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:09.903114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:09.903128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:09.903139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:09.906971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.061 [2024-07-22 19:43:09.913870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:51.061 [2024-07-22 19:43:09.915984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.061 [2024-07-22 19:43:09.916759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.061 [2024-07-22 19:43:09.916806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.061 [2024-07-22 19:43:09.916822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.061 [2024-07-22 19:43:09.917106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:09.917360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:09.917375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:09.917387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:09.921211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.061 [2024-07-22 19:43:09.930212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.061 [2024-07-22 19:43:09.930934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.061 [2024-07-22 19:43:09.930981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.061 [2024-07-22 19:43:09.930996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.061 [2024-07-22 19:43:09.931284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:09.931532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:09.931545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:09.931557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:09.935373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.061 [2024-07-22 19:43:09.944381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.061 [2024-07-22 19:43:09.945109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.061 [2024-07-22 19:43:09.945155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.061 [2024-07-22 19:43:09.945172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.061 [2024-07-22 19:43:09.945459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:09.945707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:09.945720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:09.945731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:09.949549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.061 [2024-07-22 19:43:09.958549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.061 [2024-07-22 19:43:09.959278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.061 [2024-07-22 19:43:09.959325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.061 [2024-07-22 19:43:09.959342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.061 [2024-07-22 19:43:09.959616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:09.959862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:09.959876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:09.959887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:09.963712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.061 [2024-07-22 19:43:09.972753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.061 [2024-07-22 19:43:09.973529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.061 [2024-07-22 19:43:09.973575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.061 [2024-07-22 19:43:09.973590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.061 [2024-07-22 19:43:09.973868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:09.974116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:09.974129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:09.974140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:09.977967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.061 [2024-07-22 19:43:09.986991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.061 [2024-07-22 19:43:09.987770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.061 [2024-07-22 19:43:09.987817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.061 [2024-07-22 19:43:09.987833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.061 [2024-07-22 19:43:09.988108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:09.988362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:09.988377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:09.988388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:09.992202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.061 [2024-07-22 19:43:10.001214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.061 [2024-07-22 19:43:10.002391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.061 [2024-07-22 19:43:10.002437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.061 [2024-07-22 19:43:10.002455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.061 [2024-07-22 19:43:10.002731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.061 [2024-07-22 19:43:10.002977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.061 [2024-07-22 19:43:10.002990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.061 [2024-07-22 19:43:10.003001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.061 [2024-07-22 19:43:10.006834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.323 [2024-07-22 19:43:10.015406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.323 [2024-07-22 19:43:10.016037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.323 [2024-07-22 19:43:10.016063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.323 [2024-07-22 19:43:10.016075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.323 [2024-07-22 19:43:10.016322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.323 [2024-07-22 19:43:10.016565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.323 [2024-07-22 19:43:10.016582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.323 [2024-07-22 19:43:10.016592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.020403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.324 [2024-07-22 19:43:10.029618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.030380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.030426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.030442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.030717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.030963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.030976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.030987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.034813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.324 [2024-07-22 19:43:10.043819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.044456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.044483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.044495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.044740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.044981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.044994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.045005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.048820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.324 [2024-07-22 19:43:10.058043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.058609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:51.324 [2024-07-22 19:43:10.058635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:51.324 [2024-07-22 19:43:10.058645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.324 [2024-07-22 19:43:10.058652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:51.324 [2024-07-22 19:43:10.058659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:51.324 [2024-07-22 19:43:10.058793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:51.324 [2024-07-22 19:43:10.058806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.058851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.058867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.058908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.324 [2024-07-22 19:43:10.058935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:51.324 [2024-07-22 19:43:10.059149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.059404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.059418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.059430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.063256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.324 [2024-07-22 19:43:10.072286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.072714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.072740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.072752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.072996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.073245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.073258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.073268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.077089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.324 [2024-07-22 19:43:10.086557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.087191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.087220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.087232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.087475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.087716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.087728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.087738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.091553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.324 [2024-07-22 19:43:10.100778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.101511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.101560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.101576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.101856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.102104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.102122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.102134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.105974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.324 [2024-07-22 19:43:10.114993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.115798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.115846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.115862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.116141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.116397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.116411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.116423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.120241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.324 [2024-07-22 19:43:10.129253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.129918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.129964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.129980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.130264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.324 [2024-07-22 19:43:10.130513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.324 [2024-07-22 19:43:10.130527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.324 [2024-07-22 19:43:10.130538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.324 [2024-07-22 19:43:10.134360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.324 [2024-07-22 19:43:10.143364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.324 [2024-07-22 19:43:10.144064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.324 [2024-07-22 19:43:10.144090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.324 [2024-07-22 19:43:10.144102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.324 [2024-07-22 19:43:10.144352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.144594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.144606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.144616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.148425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.325 [2024-07-22 19:43:10.157642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.158436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.158482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.158499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.158774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.159020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.159034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.159045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.162869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.325 [2024-07-22 19:43:10.171872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.172635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.172682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.172697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.172973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.173228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.173243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.173255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.177081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.325 [2024-07-22 19:43:10.186100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.186896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.186942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.186959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.187242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.187489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.187503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.187514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.191330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.325 [2024-07-22 19:43:10.200331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.201005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.201030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.201048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.201299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.201542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.201554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.201564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.205370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.325 [2024-07-22 19:43:10.214581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.215312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.215359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.215375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.215650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.215896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.215910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.215921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.219743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.325 [2024-07-22 19:43:10.228748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.229521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.229568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.229583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.229857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.230104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.230118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.230129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.233948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.325 [2024-07-22 19:43:10.242951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.243423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.243469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.243486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.243762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.244007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.244025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.244036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.247856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.325 [2024-07-22 19:43:10.257090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.257877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.257923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.257939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.258220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.258467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.258480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.258491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.325 [2024-07-22 19:43:10.262305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.325 [2024-07-22 19:43:10.271302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.325 [2024-07-22 19:43:10.272100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.325 [2024-07-22 19:43:10.272146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.325 [2024-07-22 19:43:10.272161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.325 [2024-07-22 19:43:10.272444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.325 [2024-07-22 19:43:10.272692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.325 [2024-07-22 19:43:10.272706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.325 [2024-07-22 19:43:10.272716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.276530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.588 [2024-07-22 19:43:10.285542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.286305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.286352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.286369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.286645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.286891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.286905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.286916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.290741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.588 [2024-07-22 19:43:10.299740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.300395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.300421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.300432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.300674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.300916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.300929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.300939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.304750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.588 [2024-07-22 19:43:10.313970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.314580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.314603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.314614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.314854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.315094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.315105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.315115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.318926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.588 [2024-07-22 19:43:10.328132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.328878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.328924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.328949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.329231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.329477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.329491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.329501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.333315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.588 [2024-07-22 19:43:10.342309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.342992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.343017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.343033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.343280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.343521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.343533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.343543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.347344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.588 [2024-07-22 19:43:10.356551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.357080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.357102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.357114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.357358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.357599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.357612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.357621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.361427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.588 [2024-07-22 19:43:10.370635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.371125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.371147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.371158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.371402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.371642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.371654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.371664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.375467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.588 [2024-07-22 19:43:10.384900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.588 [2024-07-22 19:43:10.385523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.588 [2024-07-22 19:43:10.385547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.588 [2024-07-22 19:43:10.385558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.588 [2024-07-22 19:43:10.385798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.588 [2024-07-22 19:43:10.386042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.588 [2024-07-22 19:43:10.386055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.588 [2024-07-22 19:43:10.386064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.588 [2024-07-22 19:43:10.389868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.589 [2024-07-22 19:43:10.399072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.399739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.399762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.399772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.400011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.400255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.400268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.400278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.404075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.589 [2024-07-22 19:43:10.413279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.414028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.414075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.414090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.414373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.414618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.414633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.414644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.418461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.589 [2024-07-22 19:43:10.427460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.428134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.428158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.428170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.428418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.428659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.428672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.428681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.432500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.589 [2024-07-22 19:43:10.441706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.442505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.442552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.442567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.442841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.443086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.443100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.443111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.446929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.589 [2024-07-22 19:43:10.455922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.456676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.456723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.456738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.457012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.457266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.457281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.457292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.461104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.589 [2024-07-22 19:43:10.470102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.470883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.470931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.470947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.471228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.471475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.471489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.471501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.475316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.589 [2024-07-22 19:43:10.484329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.485090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.485136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.485157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.485438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.485684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.485698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.485709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.489520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.589 [2024-07-22 19:43:10.498516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.499147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.499172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.499184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.499432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.499674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.499686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.499696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.503502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
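The block above is one pattern repeated: bdev_nvme disconnects the controller, the TCP connect() to 10.0.0.2:4420 is refused (errno = 111), controller reinitialization fails, and another reset is scheduled a few milliseconds later. The refusals are expected at this point in the run, since nothing is accepting connections on that port yet while the reconnect path is being exercised. A quick way to confirm what errno 111 means on a Linux build node (not part of the test script; assumes python3 is installed on the node):

```bash
# Decode errno 111 as reported by posix_sock_create (assumes python3 is present on the node).
python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
# Expected output: "111 Connection refused" - no listener on 10.0.0.2:4420 at that moment.
```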
00:38:51.589 [2024-07-22 19:43:10.512709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 [2024-07-22 19:43:10.513547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.513594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.513610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 [2024-07-22 19:43:10.513884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 [2024-07-22 19:43:10.514130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.589 [2024-07-22 19:43:10.514144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.589 [2024-07-22 19:43:10.514161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.589 [2024-07-22 19:43:10.517980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.589 [2024-07-22 19:43:10.526984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.589 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:51.589 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:38:51.589 [2024-07-22 19:43:10.527556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.589 [2024-07-22 19:43:10.527581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.589 [2024-07-22 19:43:10.527593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.589 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:51.589 [2024-07-22 19:43:10.527839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.589 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:51.589 [2024-07-22 19:43:10.528106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.590 [2024-07-22 19:43:10.528118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.590 [2024-07-22 19:43:10.528129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.590 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.590 [2024-07-22 19:43:10.531936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.852 [2024-07-22 19:43:10.541147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.852 [2024-07-22 19:43:10.541904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.852 [2024-07-22 19:43:10.541951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.852 [2024-07-22 19:43:10.541966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.852 [2024-07-22 19:43:10.542247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.852 [2024-07-22 19:43:10.542494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.852 [2024-07-22 19:43:10.542508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.852 [2024-07-22 19:43:10.542520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.852 [2024-07-22 19:43:10.546335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.852 [2024-07-22 19:43:10.555333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.852 [2024-07-22 19:43:10.555971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.852 [2024-07-22 19:43:10.555995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.852 [2024-07-22 19:43:10.556007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.852 [2024-07-22 19:43:10.556255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.852 [2024-07-22 19:43:10.556497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.852 [2024-07-22 19:43:10.556510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.852 [2024-07-22 19:43:10.556520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.852 [2024-07-22 19:43:10.560323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.852 [2024-07-22 19:43:10.569538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.852 [2024-07-22 19:43:10.570313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.852 [2024-07-22 19:43:10.570360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.852 [2024-07-22 19:43:10.570377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.852 [2024-07-22 19:43:10.570651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.852 [2024-07-22 19:43:10.570897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.852 [2024-07-22 19:43:10.570910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.852 [2024-07-22 19:43:10.570921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.852 [2024-07-22 19:43:10.574743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.852 [2024-07-22 19:43:10.575178] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:51.852 [2024-07-22 19:43:10.583739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.852 [2024-07-22 19:43:10.584483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.852 [2024-07-22 19:43:10.584529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.852 [2024-07-22 19:43:10.584545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.852 [2024-07-22 19:43:10.584819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.852 [2024-07-22 19:43:10.585066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.852 [2024-07-22 19:43:10.585081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.852 [2024-07-22 19:43:10.585092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.852 [2024-07-22 19:43:10.588919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
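Interleaved with the reconnect errors, the host/bdevperf.sh xtrace shows the harness arming its cleanup trap before it configures the target, so nvmftestfini runs even if the test is interrupted. The idiom, lifted from the xtrace line above (process_shm and nvmftestfini are the harness's own helper functions, not standalone commands):

```bash
# Cleanup-on-exit idiom from the xtrace above: arm the trap first, then do the work.
# process_shm and nvmftestfini are helpers defined by the test harness, not external tools.
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
```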
00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.852 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.852 [2024-07-22 19:43:10.597920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.852 [2024-07-22 19:43:10.598704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.852 [2024-07-22 19:43:10.598751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.852 [2024-07-22 19:43:10.598767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.852 [2024-07-22 19:43:10.599040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.852 [2024-07-22 19:43:10.599295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.852 [2024-07-22 19:43:10.599309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.852 [2024-07-22 19:43:10.599321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.852 [2024-07-22 19:43:10.603134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.852 [2024-07-22 19:43:10.612140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.852 [2024-07-22 19:43:10.612947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.852 [2024-07-22 19:43:10.612994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.852 [2024-07-22 19:43:10.613009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.852 [2024-07-22 19:43:10.613293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.852 [2024-07-22 19:43:10.613540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.852 [2024-07-22 19:43:10.613553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.852 [2024-07-22 19:43:10.613565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.852 [2024-07-22 19:43:10.617380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.852 [2024-07-22 19:43:10.626378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.852 [2024-07-22 19:43:10.627114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.852 [2024-07-22 19:43:10.627160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.852 [2024-07-22 19:43:10.627177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.852 [2024-07-22 19:43:10.627462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.853 [2024-07-22 19:43:10.627708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.853 [2024-07-22 19:43:10.627722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.853 [2024-07-22 19:43:10.627733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.853 [2024-07-22 19:43:10.631552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.853 Malloc0 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.853 [2024-07-22 19:43:10.640554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.853 [2024-07-22 19:43:10.641225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.853 [2024-07-22 19:43:10.641251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.853 [2024-07-22 19:43:10.641263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.853 [2024-07-22 19:43:10.641504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.853 [2024-07-22 19:43:10.641745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.853 [2024-07-22 19:43:10.641758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.853 [2024-07-22 19:43:10.641768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.853 [2024-07-22 19:43:10.645579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.853 [2024-07-22 19:43:10.654793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.853 [2024-07-22 19:43:10.655536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.853 [2024-07-22 19:43:10.655583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:38:51.853 [2024-07-22 19:43:10.655598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:38:51.853 [2024-07-22 19:43:10.655871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:38:51.853 [2024-07-22 19:43:10.656117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:51.853 [2024-07-22 19:43:10.656131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:51.853 [2024-07-22 19:43:10.656142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:51.853 [2024-07-22 19:43:10.659963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.853 [2024-07-22 19:43:10.668964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:51.853 [2024-07-22 19:43:10.669402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.853 19:43:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3178054 00:38:51.853 [2024-07-22 19:43:10.799328] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
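Between the reconnect retries, the xtrace also shows the standard NVMe-oF/TCP target bring-up: create the TCP transport, create a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as a namespace, and listen on 10.0.0.2:4420; once the listener notice appears, the next reset attempt succeeds. Issued directly with scripts/rpc.py instead of the rpc_cmd wrapper, the same sequence would look roughly like this (the rpc.py path and the default RPC socket are assumptions; the arguments are copied from the xtrace):

```bash
# Sketch of the target bring-up performed above via rpc_cmd (arguments copied from the xtrace).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # transport options as logged
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```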
00:39:01.851 00:39:01.851 Latency(us) 00:39:01.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:01.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:01.851 Verification LBA range: start 0x0 length 0x4000 00:39:01.851 Nvme1n1 : 15.00 7429.32 29.02 9266.98 0.00 7638.75 860.16 23920.64 00:39:01.851 =================================================================================================================== 00:39:01.851 Total : 7429.32 29.02 9266.98 0.00 7638.75 860.16 23920.64 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:01.851 rmmod nvme_tcp 00:39:01.851 rmmod nvme_fabrics 00:39:01.851 rmmod nvme_keyring 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3179303 ']' 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3179303 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3179303 ']' 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3179303 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3179303 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3179303' 00:39:01.851 killing process with pid 3179303 00:39:01.851 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3179303 00:39:01.851 
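The bdevperf summary above reports roughly 7.4 K completed and 9.3 K failed I/Os per second over the 15 s run (failures are expected while the controller is repeatedly reset), after which the harness tears everything down: the subsystem is deleted, the kernel nvme-tcp/nvme-fabrics modules are unloaded, and the nvmf_tgt process (pid 3179303 in this run) is killed. Stripped of the harness wrappers, that teardown is roughly:

```bash
# Rough shape of the nvmftestfini teardown in the log above (a sketch, not the harness source).
sync
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
nvmfpid=3179303                # pid the harness recorded when it started nvmf_tgt
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null || true
```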
19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3179303 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:02.113 19:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:04.729 00:39:04.729 real 0m29.895s 00:39:04.729 user 1m10.119s 00:39:04.729 sys 0m7.251s 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:04.729 ************************************ 00:39:04.729 END TEST nvmf_bdevperf 00:39:04.729 ************************************ 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.729 ************************************ 00:39:04.729 START TEST nvmf_target_disconnect 00:39:04.729 ************************************ 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:04.729 * Looking for test storage... 
00:39:04.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.729 
19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:04.729 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:39:04.730 19:43:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.319 
19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:11.319 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:11.319 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:11.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:11.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:11.319 19:43:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:11.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:39:11.319 00:39:11.319 --- 10.0.0.2 ping statistics --- 00:39:11.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.319 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:39:11.319 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:11.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:39:11.320 00:39:11.320 --- 10.0.0.1 ping statistics --- 00:39:11.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.320 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:11.320 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:11.580 ************************************ 00:39:11.580 START TEST nvmf_target_disconnect_tc1 00:39:11.580 ************************************ 00:39:11.580 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:39:11.580 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:11.580 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:39:11.580 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:11.580 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:11.580 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:11.581 19:43:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:11.581 EAL: No free 2048 kB hugepages reported on node 1 00:39:11.581 [2024-07-22 19:43:30.471529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:11.581 [2024-07-22 19:43:30.471632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:39:11.581 [2024-07-22 19:43:30.471705] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:11.581 [2024-07-22 19:43:30.471725] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:11.581 [2024-07-22 19:43:30.471744] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:39:11.581 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:39:11.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:39:11.581 Initializing NVMe Controllers 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:11.581 00:39:11.581 real 0m0.216s 00:39:11.581 user 0m0.092s 00:39:11.581 sys 0m0.123s 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:11.581 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:11.581 ************************************ 00:39:11.581 END TEST nvmf_target_disconnect_tc1 00:39:11.581 ************************************ 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:39:11.842 19:43:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:11.842 ************************************ 00:39:11.842 START TEST nvmf_target_disconnect_tc2 00:39:11.842 ************************************ 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3185427 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3185427 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3185427 ']' 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:11.842 19:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:11.842 [2024-07-22 19:43:30.681864] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:39:11.842 [2024-07-22 19:43:30.681985] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.842 EAL: No free 2048 kB hugepages reported on node 1 00:39:12.103 [2024-07-22 19:43:30.836956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:12.364 [2024-07-22 19:43:31.075584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:12.364 [2024-07-22 19:43:31.075643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:12.364 [2024-07-22 19:43:31.075658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:12.364 [2024-07-22 19:43:31.075669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:12.364 [2024-07-22 19:43:31.075682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:12.364 [2024-07-22 19:43:31.075906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:39:12.364 [2024-07-22 19:43:31.076030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:39:12.364 [2024-07-22 19:43:31.076135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:39:12.364 [2024-07-22 19:43:31.076164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:12.625 Malloc0 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:12.625 [2024-07-22 19:43:31.540773] 
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.625 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:12.886 [2024-07-22 19:43:31.581099] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3185741 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:39:12.886 19:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:12.886 EAL: No free 2048 kB hugepages reported on node 1 00:39:14.808 19:43:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3185427 00:39:14.808 19:43:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Write completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 Read completed with error (sct=0, sc=8) 00:39:14.808 starting I/O failed 00:39:14.808 [2024-07-22 19:43:33.623142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:14.808 [2024-07-22 19:43:33.623696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.623734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 
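What tc2 has done up to this point: it brought up an SPDK NVMe-oF/TCP target inside the cvl_0_0_ns_spdk network namespace, configured it over JSON-RPC (malloc bdev, TCP transport, subsystem cnode1, namespace, listeners on 10.0.0.2:4420), started the bundled reconnect example as the host, and then kill -9'd the target so the host sees the I/O failures and connection errors that follow. The sketch below approximates that flow outside the autotest harness; the $SPDK variable, the fixed sleeps in place of the harness's waitforlisten helper, and the use of the target's default RPC socket are assumptions, while the commands, flags, and addresses are copied from the log above.

#!/usr/bin/env bash
# Rough standalone sketch of the tc2 flow captured above. Assumptions: an SPDK
# build tree at $SPDK, hugepages already reserved (scripts/setup.sh), and the
# cvl_0_0_ns_spdk namespace already carrying 10.0.0.2.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the NVMe-oF target with the same flags nvmfappstart uses in the log.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
tgt_pid=$!
sleep 2   # the harness waits on the RPC socket; a fixed sleep keeps the sketch short

# Configure the target over JSON-RPC (rpc_cmd in the harness wraps scripts/rpc.py).
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Start the reconnect example as the host, give it a moment to connect and
# submit I/O, then kill the target to force the disconnect the test exercises.
"$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
sleep 2
kill -9 "$tgt_pid"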
00:39:14.808 [2024-07-22 19:43:33.624122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.624137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.624609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.624644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.625018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.625032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.625610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.625645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.625955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.625969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.626449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.626484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.626866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.626880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.627398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.627433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.627660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.627674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.628006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.628018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 
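Every repeated entry from here on follows the same pattern: the reconnect host retries its qpair, posix_sock_create() gets errno 111 (ECONNREFUSED on Linux) because nothing is listening on 10.0.0.2:4420 after the target was killed, nvme_tcp_qpair_connect_sock reports the socket error, and the qpair is reported as failed and unrecoverable. A quick hand check of the listener state while this is happening could look like the hedged snippet below; it is not part of the test scripts, and it assumes bash's /dev/tcp pseudo-path and coreutils timeout are available.

# Prints whether anything is currently accepting TCP connections on the
# target's listen address; a refusal corresponds to the errno 111 in the log.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is accepting connections"
else
    echo "10.0.0.2:4420 refused or unreachable (errno 111 = ECONNREFUSED)"
fi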
00:39:14.808 [2024-07-22 19:43:33.628407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.628418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.628815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.628827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.629197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.629213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.629527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.629540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.808 qpair failed and we were unable to recover it. 00:39:14.808 [2024-07-22 19:43:33.629919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.808 [2024-07-22 19:43:33.629933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.630325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.630338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.630760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.630771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.631124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.631135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.631473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.631485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.631871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.631883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 
00:39:14.809 [2024-07-22 19:43:33.632191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.632204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.632645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.632656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.633034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.633045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.633511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.633547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.633920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.633934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.634195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.634214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.634579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.634590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.635022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.635032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.635500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.635536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.635925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.635939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 
00:39:14.809 [2024-07-22 19:43:33.636424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.636459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.636839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.636852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.637058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.637070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.637397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.637409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.637753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.637765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.638117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.638127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.638433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.638443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.638794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.638804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.639140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.639150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.639504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.639516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 
00:39:14.809 [2024-07-22 19:43:33.639897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.639907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.640285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.640296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.640636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.640647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.641021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.641031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.641354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.641364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.641706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.641717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.642045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.642056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.642448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.642459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.642747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.642757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.643064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.643075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 
00:39:14.809 [2024-07-22 19:43:33.643499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.643510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.809 qpair failed and we were unable to recover it. 00:39:14.809 [2024-07-22 19:43:33.643893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.809 [2024-07-22 19:43:33.643904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.644234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.644245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.644598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.644609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.644893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.644905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.645207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.645218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.645612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.645622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.646007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.646018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.646196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.646214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.646550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.646562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 
00:39:14.810 [2024-07-22 19:43:33.646935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.646945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.647417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.647451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.647831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.647851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.648230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.648242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.648590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.648600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.648948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.648960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.649312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.649323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.649716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.649727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.650010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.650021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.650392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.650404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 
00:39:14.810 [2024-07-22 19:43:33.650721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.650732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.651098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.651108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.651478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.651489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.651806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.651816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.652181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.652191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.652526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.652536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.652869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.652879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.653255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.653266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.653581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.653592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.653821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.653831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 
00:39:14.810 [2024-07-22 19:43:33.654019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.654029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.654398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.654410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.654735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.654747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.655078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.655089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.655414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.655426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.655730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.655741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.656105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.656116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.656469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.656480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.656849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.810 [2024-07-22 19:43:33.656860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.810 qpair failed and we were unable to recover it. 00:39:14.810 [2024-07-22 19:43:33.657243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.657254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 
00:39:14.811 [2024-07-22 19:43:33.657606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.657616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.657981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.657992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.658357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.658368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.658719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.658731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.659051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.659064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.659407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.659418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.659747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.659757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.660146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.660157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.660521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.660533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.660894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.660904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 
00:39:14.811 [2024-07-22 19:43:33.661227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.661239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.661586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.661596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.661959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.661969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.662278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.662290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.662686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.662697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.662937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.662948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.663325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.663336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.663692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.663702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.664078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.664088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.664351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.664362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 
00:39:14.811 [2024-07-22 19:43:33.664718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.664728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.665140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.665150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.665524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.665535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.665887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.665901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.666242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.666253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.666611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.666622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.666966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.666977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.667324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.667335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.667667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.667677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.668057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.668068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 
00:39:14.811 [2024-07-22 19:43:33.668433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.668444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.668836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.668846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.669048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.669061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.669508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.669519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.669946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.669957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.670332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.811 [2024-07-22 19:43:33.670342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.811 qpair failed and we were unable to recover it. 00:39:14.811 [2024-07-22 19:43:33.670540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.670557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.670869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.670880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.671251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.671263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.671617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.671628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 
00:39:14.812 [2024-07-22 19:43:33.672010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.672020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.672395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.672406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.672750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.672761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.672952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.672963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.673332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.673345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.673702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.673713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.673906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.673916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.674261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.674272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.674660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.674671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.674916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.674926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 
00:39:14.812 [2024-07-22 19:43:33.675298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.675309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.675670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.675680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.676096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.676106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.676448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.676460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.676823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.676833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.677177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.677188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.677559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.677569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.677983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.677995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.678344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.678354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.678707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.678718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 
00:39:14.812 [2024-07-22 19:43:33.679059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.679070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.679408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.679418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.679775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.679785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.680126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.680137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.680473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.680484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.680857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.680868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.681240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.812 [2024-07-22 19:43:33.681251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.812 qpair failed and we were unable to recover it. 00:39:14.812 [2024-07-22 19:43:33.681623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.681635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.682005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.682015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.682366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.682377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 
00:39:14.813 [2024-07-22 19:43:33.682777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.682788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.683194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.683215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.683574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.683584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.683937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.683948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.684319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.684331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.684706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.684717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.685076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.685087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.685429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.685440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.685793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.685805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.686176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.686187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 
00:39:14.813 [2024-07-22 19:43:33.686544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.686556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.686912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.686923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.687273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.687284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.687670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.687680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.688035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.688045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.688411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.688422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.688792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.688802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.689152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.689162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.689504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.689515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.689727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.689738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 
00:39:14.813 [2024-07-22 19:43:33.690092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.690103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.690369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.690380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.690734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.690744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.691093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.691104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.691460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.691472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.691851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.691861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.692214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.692225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.692553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.692563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.692941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.692951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.693327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.693337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 
00:39:14.813 [2024-07-22 19:43:33.693705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.693734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.694074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.694085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.694346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.694357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.813 [2024-07-22 19:43:33.694721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.813 [2024-07-22 19:43:33.694732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.813 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.695111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.695122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.695400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.695410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.695755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.695766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.696138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.696148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.696537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.696550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.696789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.696799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 
00:39:14.814 [2024-07-22 19:43:33.697166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.697176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.697518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.697532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.697743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.697754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.698117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.698128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.698486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.698497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.698852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.698863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.699210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.699222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.699546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.699557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.699917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.699927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.700305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.700316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 
00:39:14.814 [2024-07-22 19:43:33.700672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.700683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.701044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.701054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.701483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.701494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.701849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.701860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.702218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.702230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.702603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.702614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.702968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.702978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.703344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.703356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.703715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.703726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.704078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.704089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 
00:39:14.814 [2024-07-22 19:43:33.704439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.704451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.704830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.704841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.705184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.705194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.705547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.705558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.705928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.705939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.706292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.706303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.706784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.706803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.707050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.707061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.707469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.707480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 00:39:14.814 [2024-07-22 19:43:33.707917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.707928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.814 qpair failed and we were unable to recover it. 
00:39:14.814 [2024-07-22 19:43:33.708275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.814 [2024-07-22 19:43:33.708286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.708642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.708652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.709000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.709011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.709389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.709400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.709744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.709755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.710107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.710118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.710473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.710485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.710703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.710715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.711068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.711078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.711428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.711440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 
00:39:14.815 [2024-07-22 19:43:33.711816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.711827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.712178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.712193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.712580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.712591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.712952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.712964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.713319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.713330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.713583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.713594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.713790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.713801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.714139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.714150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.714512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.714523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.714879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.714890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 
00:39:14.815 [2024-07-22 19:43:33.715276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.715288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.715671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.715681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.715876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.715887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.716262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.716274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.716535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.716548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.716743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.716754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.717061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.717072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.717414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.717425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.717801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.717811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.718164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.718176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 
00:39:14.815 [2024-07-22 19:43:33.718534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.718544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.718896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.718907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.719257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.719269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.719658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.719668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.720021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.720032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.815 [2024-07-22 19:43:33.720428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.815 [2024-07-22 19:43:33.720438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.815 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.720811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.720821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.721171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.721182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.721561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.721573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.721945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.721956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 
00:39:14.816 [2024-07-22 19:43:33.722304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.722316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.722672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.722682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.723101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.723111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.723446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.723458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.723672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.723684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.724062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.724073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.724426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.724437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.724810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.724820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.725197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.725211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.725555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.725566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 
00:39:14.816 [2024-07-22 19:43:33.725925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.725936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.726282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.726295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.726655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.726666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.727029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.727040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.727417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.727428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.727689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.727700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.728048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.728058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.728321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.728332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.728712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.728723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.729091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.729102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 
00:39:14.816 [2024-07-22 19:43:33.729459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.729470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.729845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.729857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.730245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.730257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.730641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.730653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.731057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.731067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.731420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.731431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.731805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.731816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.732125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.732136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.732492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.732503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 00:39:14.816 [2024-07-22 19:43:33.732869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:14.816 [2024-07-22 19:43:33.732880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:14.816 qpair failed and we were unable to recover it. 
00:39:15.093 [2024-07-22 19:43:33.801642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.093 [2024-07-22 19:43:33.801653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.093 qpair failed and we were unable to recover it. 00:39:15.093 [2024-07-22 19:43:33.802025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.093 [2024-07-22 19:43:33.802036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.093 qpair failed and we were unable to recover it. 00:39:15.093 [2024-07-22 19:43:33.802463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.093 [2024-07-22 19:43:33.802474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.802815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.802826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.803027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.803039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.803246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.803256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.803587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.803597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.803943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.803954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.804350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.804360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.804722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.804733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 
00:39:15.094 [2024-07-22 19:43:33.805073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.805083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.805461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.805473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.805826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.805837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.806214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.806224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.806586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.806596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.806946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.806956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.807328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.807339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.807688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.807698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.808097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.808107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.808354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.808365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 
00:39:15.094 [2024-07-22 19:43:33.808731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.808745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.809100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.809110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.809438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.809450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.809806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.809816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.810167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.810177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.810559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.810569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.810728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.810738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.811100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.811111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.811456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.811468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.811818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.811829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 
00:39:15.094 [2024-07-22 19:43:33.812183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.812194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.812539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.812551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.812904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.812915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.813271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.813281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.813569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.813579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.094 [2024-07-22 19:43:33.813959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.094 [2024-07-22 19:43:33.813970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.094 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.814361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.814371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.814727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.814737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.814953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.814963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.815318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.815329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 
00:39:15.095 [2024-07-22 19:43:33.815682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.815693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.815914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.815925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.816281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.816292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.816667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.816678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.817033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.817044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.817230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.817241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.817526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.817537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.817780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.817791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.818149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.818160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.818531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.818542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 
00:39:15.095 [2024-07-22 19:43:33.818899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.818910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.819265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.819276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.819661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.819672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.820027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.820037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.820400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.820411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.820774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.820785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.821137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.821148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.821544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.821555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.821771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.821782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.822135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.822146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 
00:39:15.095 [2024-07-22 19:43:33.822504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.822516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.822897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.822907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.823152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.823162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.823516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.823527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.823902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.823913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.824287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.824298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.824649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.824661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.825001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.825013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.825351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.825362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 00:39:15.095 [2024-07-22 19:43:33.825715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.095 [2024-07-22 19:43:33.825726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.095 qpair failed and we were unable to recover it. 
00:39:15.096 [2024-07-22 19:43:33.825927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.825938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.826264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.826278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.826466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.826476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.826832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.826843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.827193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.827208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.827540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.827552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.827890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.827900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.828255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.828266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.828614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.828626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.828820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.828831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 
00:39:15.096 [2024-07-22 19:43:33.829213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.829224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.829458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.829468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.829840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.829850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.830227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.830239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.830624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.830638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.831011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.831022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.831444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.831455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.831876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.831888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.832264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.832274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.832633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.832645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 
00:39:15.096 [2024-07-22 19:43:33.832998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.833008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.833380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.833391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.833743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.833753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.834108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.834120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.834462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.834473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.834825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.834836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.835219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.835230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.835574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.835585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.835935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.835946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.836303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.836314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 
00:39:15.096 [2024-07-22 19:43:33.836708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.836718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.837073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.837084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.837453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.837464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.837821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.837832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.838179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.838189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.838525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.838536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.096 qpair failed and we were unable to recover it. 00:39:15.096 [2024-07-22 19:43:33.838767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.096 [2024-07-22 19:43:33.838779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.839150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.839162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.839518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.839528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.839899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.839910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 
00:39:15.097 [2024-07-22 19:43:33.840162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.840173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.840536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.840550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.840897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.840908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.841160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.841170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.841519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.841538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.841785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.841795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.841988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.841999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.842368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.842380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.842753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.842764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.843121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.843132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 
00:39:15.097 [2024-07-22 19:43:33.843394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.843404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.843652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.843663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.843923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.843933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.844288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.844299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.844674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.844685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.845060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.845071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.845416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.845427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.845798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.845809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.846179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.846189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.846543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.846554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 
00:39:15.097 [2024-07-22 19:43:33.846899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.846909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.847289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.847300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.847756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.847775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.848142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.848153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.848509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.848521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.848896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.848910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.849251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.849264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.849634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.849645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.850024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.850038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 00:39:15.097 [2024-07-22 19:43:33.850434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.097 [2024-07-22 19:43:33.850446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.097 qpair failed and we were unable to recover it. 
00:39:15.097 [2024-07-22 19:43:33.850797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.097 [2024-07-22 19:43:33.850808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.097 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every subsequent connection attempt between 19:43:33.851 and 19:43:33.925: connect() to 10.0.0.2, port 4420 is refused (errno = 111), nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x6150003a0000, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:39:15.103 [2024-07-22 19:43:33.925442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.103 [2024-07-22 19:43:33.925455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.103 qpair failed and we were unable to recover it.
00:39:15.103 [2024-07-22 19:43:33.925807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.925817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.926172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.926182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.926576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.926587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.926943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.926956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.927309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.927321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.927696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.927707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.928051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.928061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.928413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.103 [2024-07-22 19:43:33.928425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.103 qpair failed and we were unable to recover it. 00:39:15.103 [2024-07-22 19:43:33.928800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.928811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.929167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.929177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 
00:39:15.104 [2024-07-22 19:43:33.929506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.929517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.929737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.929749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.930109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.930120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.930489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.930500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.930853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.930863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.931208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.931219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.931572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.931583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.931936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.931947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.932303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.932316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.932542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.932553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 
00:39:15.104 [2024-07-22 19:43:33.932909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.932920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.933120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.933130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.933506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.933518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.933896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.933908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.934264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.934275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.934624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.934636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.934990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.935000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.935372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.935383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.935734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.935747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.936102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.936113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 
00:39:15.104 [2024-07-22 19:43:33.936460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.936471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.936846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.936856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.937214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.937225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.937462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.937472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.937830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.937840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.938172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.938183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.938545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.938556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.938909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.938920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.939270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.104 [2024-07-22 19:43:33.939281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.104 qpair failed and we were unable to recover it. 00:39:15.104 [2024-07-22 19:43:33.939604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.939618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 
00:39:15.105 [2024-07-22 19:43:33.939981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.939991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.940344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.940355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.940727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.940738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.941111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.941123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.941543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.941555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.941940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.941951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.942349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.942359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.942739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.942750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.943154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.943165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.943323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.943333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 
00:39:15.105 [2024-07-22 19:43:33.943703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.943713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.944086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.944096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.944466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.944477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.944838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.944848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.945202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.945218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.945568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.945578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.945931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.945942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.946304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.946315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.946749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.946759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.947095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.947106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 
00:39:15.105 [2024-07-22 19:43:33.947475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.947487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.947845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.947856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.948049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.948061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.948420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.948430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.948866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.948877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.949220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.949232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.949585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.949595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.949934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.949945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.950296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.950309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.950677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.950689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 
00:39:15.105 [2024-07-22 19:43:33.951040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.951050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.951393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.951405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.951773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.951783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.952140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.952151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.952474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.952486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.952862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.952872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.105 [2024-07-22 19:43:33.953239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.105 [2024-07-22 19:43:33.953250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.105 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.953606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.953617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.953973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.953984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.954366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.954377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 
00:39:15.106 [2024-07-22 19:43:33.954734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.954745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.955099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.955110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.955485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.955495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.955833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.955844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.956197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.956211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.956566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.956577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.956931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.956941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.957309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.957320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.957676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.957687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.958043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.958054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 
00:39:15.106 [2024-07-22 19:43:33.958424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.958435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.958805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.958816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.959167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.959178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.959542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.959553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.959910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.959921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.960146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.960157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.960511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.960522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.960874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.960886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.961232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.961242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.961448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.961459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 
00:39:15.106 [2024-07-22 19:43:33.961783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.961793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.962010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.962021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.962376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.962386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.962769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.962780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.963141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.963152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.963548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.963559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.963918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.963930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.964146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.964157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.964515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.964529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.964881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.964892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 
00:39:15.106 [2024-07-22 19:43:33.965290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.965301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.965556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.965566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.965922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.965932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.966293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.106 [2024-07-22 19:43:33.966304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.106 qpair failed and we were unable to recover it. 00:39:15.106 [2024-07-22 19:43:33.966657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.966667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.967004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.967015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.967376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.967387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.967745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.967756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.968099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.968113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.968472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.968483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 
00:39:15.107 [2024-07-22 19:43:33.968878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.968888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.969246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.969257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.969629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.969640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.970011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.970022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.970375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.970386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.970732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.970742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.971093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.971104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.971344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.971355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.971599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.971609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.971962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.971973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 
00:39:15.107 [2024-07-22 19:43:33.972193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.972207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.972596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.972606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.972964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.972975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.973322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.973332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.973678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.973689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.974062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.974072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.974412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.974423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.974778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.974788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.975144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.975156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.975535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.975547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 
00:39:15.107 [2024-07-22 19:43:33.975764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.975774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.976131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.976141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.976495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.976506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.976847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.976858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.977057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.977068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.977420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.977432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.977792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.977802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.978142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.978153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.978507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.978519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.978742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.978752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 
00:39:15.107 [2024-07-22 19:43:33.979109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.979120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.107 [2024-07-22 19:43:33.979464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.107 [2024-07-22 19:43:33.979475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.107 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.979826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.979837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.980191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.980205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.980571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.980581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.980959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.980970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.981334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.981345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.981718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.981729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.982080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.982091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.982427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.982438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 
00:39:15.108 [2024-07-22 19:43:33.982789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.982799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.983152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.983162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.983513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.983524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.983900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.983911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.984213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.984223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.984595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.984605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.984796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.984806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.985123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.985136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.985494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.985505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.985693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.985703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 
00:39:15.108 [2024-07-22 19:43:33.986074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.986084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.986437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.986448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.986807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.986818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.987199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.987222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.987408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.987419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.987746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.987756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.988106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.988116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.988306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.988317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.988564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.988575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.988914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.988925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 
00:39:15.108 [2024-07-22 19:43:33.989277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.989288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.989549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.989562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.989919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.989933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.990271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.990282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.990647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.990658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.991012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.991022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.991316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.991326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.991699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.991709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.108 [2024-07-22 19:43:33.992057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.108 [2024-07-22 19:43:33.992069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.108 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.992415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.992426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 
00:39:15.109 [2024-07-22 19:43:33.992852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.992862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.993216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.993227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.993577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.993588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.993942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.993953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.994314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.994324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.994711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.994722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.995073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.995083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.995458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.995470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.995821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.995831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.996022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.996033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 
00:39:15.109 [2024-07-22 19:43:33.996418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.996429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.996785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.996795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.997151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.997163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.997535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.997546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.997896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.997906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.998347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.998357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.998703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.998714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.999086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.999096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.999468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.999479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:33.999831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:33.999841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 
00:39:15.109 [2024-07-22 19:43:34.000102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.000113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.000465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.000475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.000829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.000841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.001193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.001206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.001454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.001465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.001869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.001880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.002232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.002243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.002617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.002628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.002980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.109 [2024-07-22 19:43:34.002991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.109 qpair failed and we were unable to recover it. 00:39:15.109 [2024-07-22 19:43:34.003365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.003376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 
00:39:15.110 [2024-07-22 19:43:34.003756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.003767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.004121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.004132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.004487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.004498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.004870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.004882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.005233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.005244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.005489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.005499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.005853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.005864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.006237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.006247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.006601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.006613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.006965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.006976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 
00:39:15.110 [2024-07-22 19:43:34.007327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.007339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.007686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.007697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.008043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.008055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.008409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.008420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.008637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.008647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.009016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.009027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.009226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.009237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.009610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.009620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.009977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.009988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.010359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.010370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 
00:39:15.110 [2024-07-22 19:43:34.010740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.010750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.011108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.011118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.011523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.011533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.011881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.011891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.012237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.012247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.012560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.012571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.012964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.012978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.013355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.013366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.013726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.013736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.013993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.014004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 
00:39:15.110 [2024-07-22 19:43:34.014350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.014360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.014548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.014558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.014886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.014896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.015243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.015254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.015606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.110 [2024-07-22 19:43:34.015618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.110 qpair failed and we were unable to recover it. 00:39:15.110 [2024-07-22 19:43:34.015965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.015976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.016319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.016330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.016650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.016660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.017006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.017015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.017388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.017399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 
00:39:15.111 [2024-07-22 19:43:34.017754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.017764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.018120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.018130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.018490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.018500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.018843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.018853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.019229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.019240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.019463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.019473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.019782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.019792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.020149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.020159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.020517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.020529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.020751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.020761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 
00:39:15.111 [2024-07-22 19:43:34.021117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.021127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.021474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.021484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.021840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.021850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.022016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.022028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.022348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.022359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.022729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.022740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.023093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.023104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.023480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.023490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.023728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.023738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.024120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.024131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 
00:39:15.111 [2024-07-22 19:43:34.024492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.024503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.024856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.024867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.025224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.025235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.025590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.025600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.025950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.025960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.026313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.026324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.026689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.026699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.026966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.026976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.027402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.027412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.027768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.027778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 
00:39:15.111 [2024-07-22 19:43:34.028134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.028144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.111 [2024-07-22 19:43:34.028526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.111 [2024-07-22 19:43:34.028536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.111 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.028881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.028892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.029235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.029246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.029617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.029629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.030003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.030014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.030211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.030223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.030595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.030605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.030827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.030837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.031031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.031042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 
00:39:15.112 [2024-07-22 19:43:34.031391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.031402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.112 [2024-07-22 19:43:34.031755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.112 [2024-07-22 19:43:34.031765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.112 qpair failed and we were unable to recover it. 00:39:15.390 [2024-07-22 19:43:34.032118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.390 [2024-07-22 19:43:34.032132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.390 qpair failed and we were unable to recover it. 00:39:15.390 [2024-07-22 19:43:34.032479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.390 [2024-07-22 19:43:34.032490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.390 qpair failed and we were unable to recover it. 00:39:15.390 [2024-07-22 19:43:34.032677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.390 [2024-07-22 19:43:34.032688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.390 qpair failed and we were unable to recover it. 00:39:15.390 [2024-07-22 19:43:34.033010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.390 [2024-07-22 19:43:34.033021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.390 qpair failed and we were unable to recover it. 00:39:15.390 [2024-07-22 19:43:34.033374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.390 [2024-07-22 19:43:34.033384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.390 qpair failed and we were unable to recover it. 00:39:15.390 [2024-07-22 19:43:34.033765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.390 [2024-07-22 19:43:34.033775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.390 qpair failed and we were unable to recover it. 00:39:15.390 [2024-07-22 19:43:34.034128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.390 [2024-07-22 19:43:34.034140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.390 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.034288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.034301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 
00:39:15.391 [2024-07-22 19:43:34.034572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.034586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.034925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.034936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.035292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.035303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.035472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.035483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.035860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.035870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.036061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.036072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.036482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.036493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.036839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.036850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.037139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.037150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.037498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.037510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 
00:39:15.391 [2024-07-22 19:43:34.037728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.037739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.038110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.038122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.038476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.038488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.038788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.038799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.039152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.039164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.039558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.039569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.039925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.039937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.040315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.040326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.040556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.040567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.040963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.040973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 
00:39:15.391 [2024-07-22 19:43:34.041391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.041402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.041773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.041783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.042024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.042035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.042388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.042399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.042759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.042769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.043151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.043161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.043508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.043519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.043873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.043884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.044239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.044249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.044587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.044597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 
00:39:15.391 [2024-07-22 19:43:34.044814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.044826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.045187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.045198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.045571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.045582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.045957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.045968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.046166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.391 [2024-07-22 19:43:34.046178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.391 qpair failed and we were unable to recover it. 00:39:15.391 [2024-07-22 19:43:34.046547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.046558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.046913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.046924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.047303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.047314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.047555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.047565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.047919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.047930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 
00:39:15.392 [2024-07-22 19:43:34.048287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.048298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.048674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.048685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.049042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.049053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.049401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.049412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.049767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.049777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.050141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.050151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.050374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.050384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.050737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.050747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.051100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.051110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.051456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.051468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 
00:39:15.392 [2024-07-22 19:43:34.051825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.051835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.052188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.052198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.052547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.052558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.052928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.052939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.053283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.053294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.053645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.053656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.054025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.054035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.054407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.054419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.054771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.054781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.055132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.055143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 
00:39:15.392 [2024-07-22 19:43:34.055500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.055511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.055856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.055868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.056235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.056246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.056597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.056608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.056963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.056988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.057330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.057344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.057754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.057765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.058116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.058126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.058435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.058445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.392 [2024-07-22 19:43:34.058819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.058829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 
00:39:15.392 [2024-07-22 19:43:34.059046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.392 [2024-07-22 19:43:34.059056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.392 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.059407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.059417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.059793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.059804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.060127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.060138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.060390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.060401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.060753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.060765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.061120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.061131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.061475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.061487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.061848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.061859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.062261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.062272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 
00:39:15.393 [2024-07-22 19:43:34.062626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.062637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.062974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.062984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.063368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.063379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.063730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.063741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.063954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.063965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.064317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.064328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.064692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.064702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.065131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.065141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.065485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.065496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.065867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.065878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 
00:39:15.393 [2024-07-22 19:43:34.066304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.066314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.066661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.066671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.067018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.067028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.067281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.067291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.067646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.067656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.068046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.068057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.068420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.068431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.068815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.068825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.069253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.069264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.069481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.069491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 
00:39:15.393 [2024-07-22 19:43:34.069838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.069848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.070166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.070177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.070523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.070533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.070919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.070930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.071282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.071292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.071680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.071692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.071887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.071898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.393 [2024-07-22 19:43:34.072137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.393 [2024-07-22 19:43:34.072147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.393 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.072536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.072547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.072779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.072791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 
00:39:15.394 [2024-07-22 19:43:34.073010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.073020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.073376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.073388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.073740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.073753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.074093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.074104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.074477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.074488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.074842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.074853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.075213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.075225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.075558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.075569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.075761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.075773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.076146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.076157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 
00:39:15.394 [2024-07-22 19:43:34.076543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.076553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.076929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.076940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.077367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.077378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.077726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.077737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.078087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.078097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.078342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.078352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.078711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.078721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.079070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.079080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.079532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.079547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.079886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.079897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 
00:39:15.394 [2024-07-22 19:43:34.080250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.080261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.080522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.080532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.080888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.080898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.081166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.081177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.081530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.081540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.081936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.081947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.082307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.082317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.082662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.082672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.082892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.082901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.083145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.083155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 
00:39:15.394 [2024-07-22 19:43:34.083499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.083511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.083885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.083895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.084113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.084123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.084492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.084503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.084857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.084867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.394 qpair failed and we were unable to recover it. 00:39:15.394 [2024-07-22 19:43:34.085239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.394 [2024-07-22 19:43:34.085253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.085611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.085621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.085972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.085982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.086331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.086342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.086722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.086733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 
00:39:15.395 [2024-07-22 19:43:34.087088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.087098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.087479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.087491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.087845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.087856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.088241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.088251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.088604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.088615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.088972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.088982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.089335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.089346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.089703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.089714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.090070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.090080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 00:39:15.395 [2024-07-22 19:43:34.090458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.395 [2024-07-22 19:43:34.090469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.395 qpair failed and we were unable to recover it. 
00:39:15.395 [2024-07-22 19:43:34.090822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.395 [2024-07-22 19:43:34.090832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.395 qpair failed and we were unable to recover it.
00:39:15.395 [2024-07-22 19:43:34.091212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.395 [2024-07-22 19:43:34.091223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.395 qpair failed and we were unable to recover it.
00:39:15.395 [2024-07-22 19:43:34.091569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.395 [2024-07-22 19:43:34.091579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.395 qpair failed and we were unable to recover it.
00:39:15.395 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from [2024-07-22 19:43:34.091931] through [2024-07-22 19:43:34.163534] ...]
00:39:15.401 [2024-07-22 19:43:34.163893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.401 [2024-07-22 19:43:34.163903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.401 qpair failed and we were unable to recover it.
00:39:15.401 [2024-07-22 19:43:34.164296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.164306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.164682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.164693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.164884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.164895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.165233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.165244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.165638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.165649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.166007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.166018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.166382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.166394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.166801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.166811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.167185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.167195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.167593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.167604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 
00:39:15.401 [2024-07-22 19:43:34.167850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.167860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.168212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.168222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.168618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.168629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.168979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.168990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.169308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.169323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.169691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.169704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.170074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.170084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.170466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.170477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.170784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.170796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.171125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.171135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 
00:39:15.401 [2024-07-22 19:43:34.171481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.171492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.171840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.171850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.401 [2024-07-22 19:43:34.172074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.401 [2024-07-22 19:43:34.172084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.401 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.172474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.172485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.172862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.172873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.173232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.173243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.173614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.173625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.173978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.173989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.174362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.174373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.174733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.174744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 
00:39:15.402 [2024-07-22 19:43:34.174964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.174975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.175332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.175343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.175679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.175690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.176046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.176057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.176410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.176422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.176801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.176811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.177191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.177208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.177572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.177582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.177934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.177944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.178296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.178307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 
00:39:15.402 [2024-07-22 19:43:34.178526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.178537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.178699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.178709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.179035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.179045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.179261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.179272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.179617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.179627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.179972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.179983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.180355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.180366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.180725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.180736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.181099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.181110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.181477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.181488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 
00:39:15.402 [2024-07-22 19:43:34.181878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.181889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.182249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.182261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.182644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.182655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.183005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.183016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.183359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.183370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.183742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.183754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.184126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.184136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.184491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.184503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.184866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.184877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 00:39:15.402 [2024-07-22 19:43:34.185240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.402 [2024-07-22 19:43:34.185251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.402 qpair failed and we were unable to recover it. 
00:39:15.402 [2024-07-22 19:43:34.185575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.185586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.185971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.185981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.186343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.186354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.186717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.186728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.187107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.187117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.187491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.187502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.187862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.187873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.188301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.188313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.188688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.188698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.189058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.189069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 
00:39:15.403 [2024-07-22 19:43:34.189317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.189327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.189689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.189700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.190073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.190083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.190469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.190480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.190675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.190687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.191019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.191030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.191421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.191432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.191874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.191884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.192243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.192257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.192618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.192629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 
00:39:15.403 [2024-07-22 19:43:34.193021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.193031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.193267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.193278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.193610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.193621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.193981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.193992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.194376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.194386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.194744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.194755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.195152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.195162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.195506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.195516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.195891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.195903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.196256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.196267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 
00:39:15.403 [2024-07-22 19:43:34.196618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.196629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.197064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.197075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.197428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.197438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.197797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.197808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.198157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.198168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.198411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.198424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.198798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.198808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.403 [2024-07-22 19:43:34.199160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.403 [2024-07-22 19:43:34.199170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.403 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.199515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.199526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.199744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.199755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 
00:39:15.404 [2024-07-22 19:43:34.200128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.200139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.200498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.200509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.200864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.200874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.201246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.201258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.201478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.201488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.201850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.201862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.202214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.202225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.202637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.202648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.202987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.202997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.203428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.203439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 
00:39:15.404 [2024-07-22 19:43:34.203784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.203795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.204041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.204051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.204423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.204433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.204785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.204796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.205147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.205158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.205501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.205512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.205884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.205894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.206184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.206194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.206568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.206579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.206813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.206824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 
00:39:15.404 [2024-07-22 19:43:34.207159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.207170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.207532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.207543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.207910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.207921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.208273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.208284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.208468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.208478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.208796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.208807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.209160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.209170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.209512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.209523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.209909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.404 [2024-07-22 19:43:34.209920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.404 qpair failed and we were unable to recover it. 00:39:15.404 [2024-07-22 19:43:34.210238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.405 [2024-07-22 19:43:34.210250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.405 qpair failed and we were unable to recover it. 
00:39:15.405 [2024-07-22 19:43:34.210624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.405 [2024-07-22 19:43:34.210634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.405 qpair failed and we were unable to recover it.
[The same three-line error repeats for every reconnect attempt, with only the microsecond timestamp advancing, through 2024-07-22 19:43:34.284: each attempt targets addr=10.0.0.2, port=4420 on tqpair=0x6150003a0000, connect() fails with errno = 111, and the qpair is reported as failed and unrecovered.]
00:39:15.410 [2024-07-22 19:43:34.284516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.410 [2024-07-22 19:43:34.284527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.410 qpair failed and we were unable to recover it. 00:39:15.410 [2024-07-22 19:43:34.284951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.410 [2024-07-22 19:43:34.284962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.410 qpair failed and we were unable to recover it. 00:39:15.410 [2024-07-22 19:43:34.285304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.410 [2024-07-22 19:43:34.285315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.410 qpair failed and we were unable to recover it. 00:39:15.410 [2024-07-22 19:43:34.285668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.410 [2024-07-22 19:43:34.285679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.410 qpair failed and we were unable to recover it. 00:39:15.410 [2024-07-22 19:43:34.286079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.410 [2024-07-22 19:43:34.286089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.410 qpair failed and we were unable to recover it. 00:39:15.410 [2024-07-22 19:43:34.286465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.286476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.286850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.286861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.287237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.287249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.287519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.287530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.287880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.287891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 
00:39:15.411 [2024-07-22 19:43:34.288093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.288105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.288440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.288452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.288809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.288819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.289170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.289180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.289564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.289575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.289927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.289937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.290155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.290165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.290509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.290519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.290894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.290905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.291258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.291269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 
00:39:15.411 [2024-07-22 19:43:34.291628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.291639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.291993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.292004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.292355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.292367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.292814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.292824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.293176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.293187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.293376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.293387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.293772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.293783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.294138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.294149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.294505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.294515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.294869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.294879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 
00:39:15.411 [2024-07-22 19:43:34.295256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.295267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.295625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.295636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.295986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.295996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.296212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.296223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.296564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.296574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.296928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.296938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.297292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.297303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.297669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.297680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.298057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.298068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.298417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.298428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 
00:39:15.411 [2024-07-22 19:43:34.298783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.298795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.299149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.299159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.411 qpair failed and we were unable to recover it. 00:39:15.411 [2024-07-22 19:43:34.299541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.411 [2024-07-22 19:43:34.299552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.299908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.299919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.300271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.300283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.300636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.300647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.301021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.301031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.301385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.301396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.301752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.301764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.302057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.302068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 
00:39:15.412 [2024-07-22 19:43:34.302447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.302458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.302832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.302842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.303194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.303208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.303566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.303577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.303949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.303959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.304412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.304446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.304669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.304682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.305034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.305045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.305419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.305435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.305778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.305790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 
00:39:15.412 [2024-07-22 19:43:34.306147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.306158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.306565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.306577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.306949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.306962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.307206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.307217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.307448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.307459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.307812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.307822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.308210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.308222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.308334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.308345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.308708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.308718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.309070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.309081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 
00:39:15.412 [2024-07-22 19:43:34.309461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.309473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.309857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.309868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.310221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.310232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.310583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.310594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.310941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.310952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.311330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.311341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.311702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.311713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.312024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.312036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.312329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.312341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 00:39:15.412 [2024-07-22 19:43:34.312722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.412 [2024-07-22 19:43:34.312733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.412 qpair failed and we were unable to recover it. 
00:39:15.412 [2024-07-22 19:43:34.313037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.313048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.313401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.313412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.313605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.313616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.313947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.313958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.314310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.314321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.314579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.314589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.314794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.314804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.314986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.314998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.315359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.315369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.315716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.315726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 
00:39:15.413 [2024-07-22 19:43:34.315942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.315953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.316325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.316336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.316680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.316691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.317043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.317054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.317410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.317422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.317755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.317765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.318172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.318182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.318607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.318618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.318962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.318972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.319346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.319358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 
00:39:15.413 [2024-07-22 19:43:34.319709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.319719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.320070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.320081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.320426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.320439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.320845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.320855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.321223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.321234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.321586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.321596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.321948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.321958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.322365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.322376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.322721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.322731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.323082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.323091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 
00:39:15.413 [2024-07-22 19:43:34.323462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.323473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.323892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.323903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.324147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.324158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.324518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.324529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.324886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.324896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.325275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.325287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.413 [2024-07-22 19:43:34.325638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.413 [2024-07-22 19:43:34.325649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.413 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.325994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.326006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.326264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.326278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.326653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.326664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 
00:39:15.700 [2024-07-22 19:43:34.326863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.326876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.327244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.327256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.327630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.327646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.327836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.327847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.328212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.328224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.328595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.328608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.328868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.328879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.329071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.329083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.329448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.329460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.329652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.329664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 
00:39:15.700 [2024-07-22 19:43:34.329863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.329875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.330207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.330219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.330576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.330587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.330780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.330793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.331168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.331179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.331544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.331555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.331908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.331920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.332275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.332286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.332639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.332650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 00:39:15.700 [2024-07-22 19:43:34.333017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.700 [2024-07-22 19:43:34.333028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.700 qpair failed and we were unable to recover it. 
00:39:15.701 - 00:39:15.710: the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from [2024-07-22 19:43:34.333375] through [2024-07-22 19:43:34.404291], ending with:
00:39:15.710 [2024-07-22 19:43:34.404280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.710 [2024-07-22 19:43:34.404291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.710 qpair failed and we were unable to recover it.
00:39:15.710 [2024-07-22 19:43:34.404642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.404653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.404998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.405008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.405352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.405363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.405724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.405735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.406086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.406098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.406483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.406493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.406849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.406859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.407230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.407241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.407594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.407605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.407958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.407968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 
00:39:15.710 [2024-07-22 19:43:34.408327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.408339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.408696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.408707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.409053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.409063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.409459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.409470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.409690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.409700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.410061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.410071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.410422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.410433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.410782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.410793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.411147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.411158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.411532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.411543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 
00:39:15.710 [2024-07-22 19:43:34.411902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.411913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.412269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.412281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.412632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.412643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.413014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.413025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.413402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.413413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.413766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.413776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.413968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.413979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.414301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.414312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.414680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.414699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.415053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.415066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 
00:39:15.710 [2024-07-22 19:43:34.415325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.415336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.415789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.415800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.416108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.416119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.416462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.416473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.416824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.416834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.417214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.417225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.417590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.417601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.417946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.417957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.418309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.418335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.418684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.418694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 
00:39:15.710 [2024-07-22 19:43:34.419042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.419053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.419405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.419417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.419770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.419781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.420160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.420172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.420521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.710 [2024-07-22 19:43:34.420532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.710 qpair failed and we were unable to recover it. 00:39:15.710 [2024-07-22 19:43:34.420883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.420894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.421258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.421269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.421616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.421626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.421832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.421843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.422078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.422088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 
00:39:15.711 [2024-07-22 19:43:34.422463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.422474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.422849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.422861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.423082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.423092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.423435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.423446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.423800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.423811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.424185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.424195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.424574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.424585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.424935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.424946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.425298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.425309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.425678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.425690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 
00:39:15.711 [2024-07-22 19:43:34.426041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.426051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.426414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.426426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.426780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.426791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.427161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.427172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.427518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.427529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.427879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.427891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.428241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.428252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.428460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.428470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.428794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.428804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.429151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.429164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 
00:39:15.711 [2024-07-22 19:43:34.429512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.429523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.429870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.429880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.430188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.430199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.430549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.430560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.430904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.430915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.431236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.431248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.431611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.431621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.431940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.431951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.432206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.432217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.432586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.432597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 
00:39:15.711 [2024-07-22 19:43:34.432949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.432960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.433314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.711 [2024-07-22 19:43:34.433325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.711 qpair failed and we were unable to recover it. 00:39:15.711 [2024-07-22 19:43:34.433673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.433684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.434061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.434074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.434265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.434276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.434649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.434660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.435011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.435021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.435404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.435416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.435849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.435859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.436234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.436245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 
00:39:15.712 [2024-07-22 19:43:34.436599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.436610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.436954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.436964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.437314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.437325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.437547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.437558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.437917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.437927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.438264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.438275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.438630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.438641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.439003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.439013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.439374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.439386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.439760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.439771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 
00:39:15.712 [2024-07-22 19:43:34.440121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.440132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.440438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.440449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.440801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.440815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.441195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.441211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.441535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.441545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.441897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.441909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.442255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.442265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.442651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.442662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.442904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.442914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.443308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.443321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 
00:39:15.712 [2024-07-22 19:43:34.443691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.443701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.444073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.444084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.444458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.444469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.444835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.444845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.445205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.445217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.445568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.445578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.445937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.445947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.446334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.446345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.446689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.446700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.447077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.447087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 
00:39:15.712 [2024-07-22 19:43:34.447457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.447467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.447814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.447825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.448184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.448195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.448573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.448584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.448935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.448946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.449118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.449130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.449474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.449485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.449859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.449870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.450240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.450251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 00:39:15.712 [2024-07-22 19:43:34.450681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.712 [2024-07-22 19:43:34.450692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.712 qpair failed and we were unable to recover it. 
00:39:15.712 [2024-07-22 19:43:34.451044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.712 [2024-07-22 19:43:34.451055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.712 qpair failed and we were unable to recover it.
00:39:15.712 [... the same three-line error sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously, with only the timestamps advancing, through the end of this span ...]
00:39:15.719 [2024-07-22 19:43:34.523647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:15.719 [2024-07-22 19:43:34.523657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:15.719 qpair failed and we were unable to recover it.
00:39:15.719 [2024-07-22 19:43:34.524030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.524041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.719 qpair failed and we were unable to recover it. 00:39:15.719 [2024-07-22 19:43:34.524380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.524390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.719 qpair failed and we were unable to recover it. 00:39:15.719 [2024-07-22 19:43:34.524813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.524823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.719 qpair failed and we were unable to recover it. 00:39:15.719 [2024-07-22 19:43:34.525172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.525182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.719 qpair failed and we were unable to recover it. 00:39:15.719 [2024-07-22 19:43:34.525563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.525574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.719 qpair failed and we were unable to recover it. 00:39:15.719 [2024-07-22 19:43:34.525917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.525928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.719 qpair failed and we were unable to recover it. 00:39:15.719 [2024-07-22 19:43:34.526144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.526154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.719 qpair failed and we were unable to recover it. 00:39:15.719 [2024-07-22 19:43:34.526511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.719 [2024-07-22 19:43:34.526522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.526892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.526904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.527256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.527270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 
00:39:15.720 [2024-07-22 19:43:34.527664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.527675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.528032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.528043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.528267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.528277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.528667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.528677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.529033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.529044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.529390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.529400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.720 [2024-07-22 19:43:34.529770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.720 [2024-07-22 19:43:34.529780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.720 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.530172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.530183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.530432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.530446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.530803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.530813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 
00:39:15.721 [2024-07-22 19:43:34.531188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.531198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.531569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.531581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.531934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.531944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.532176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.532186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.532551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.532562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.532912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.532922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.533279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.533290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.533644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.533656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.534030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.534041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.534396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.534407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 
00:39:15.721 [2024-07-22 19:43:34.534761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.534771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.535177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.535188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.721 [2024-07-22 19:43:34.535546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.721 [2024-07-22 19:43:34.535556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.721 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.535952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.535962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.536416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.536452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.536818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.536832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.537177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.537188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.537555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.537566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.537923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.537934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.538241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.538253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 
00:39:15.722 [2024-07-22 19:43:34.538627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.538637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.538990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.539002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.539346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.539357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.539694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.539704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.540075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.540087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.540463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.540474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.540827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.540838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.541190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.541206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.541547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.541558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.541890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.541903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 
00:39:15.722 [2024-07-22 19:43:34.542257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.542268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.542674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.542684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.543053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.543064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.543419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.543430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.543591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.543602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.543964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.543974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.544227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.544237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.544563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.544574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.544930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.544941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.545303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.545315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 
00:39:15.722 [2024-07-22 19:43:34.545697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.545708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.546060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.546071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.546413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.546425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.546783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.546794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.547172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.547183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.547504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.547515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.547868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.547879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.548232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.548243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.548518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.548530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.548881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.548892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 
00:39:15.722 [2024-07-22 19:43:34.549249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.549259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.722 [2024-07-22 19:43:34.549631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.722 [2024-07-22 19:43:34.549641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.722 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.550009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.550020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.550241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.550252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.550604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.550614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.550966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.550979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.551357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.551367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.551718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.551729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.552093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.552104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.552479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.552489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 
00:39:15.723 [2024-07-22 19:43:34.552859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.552870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.553221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.553232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.553582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.553597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.553954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.553966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.554158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.554170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.554473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.554484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.554924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.554935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.555307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.555319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.555667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.555678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.556022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.556034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 
00:39:15.723 [2024-07-22 19:43:34.556231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.556243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.556561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.556571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.556763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.556774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.556953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.556965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.557293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.557303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.557664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.557674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.557866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.557877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.558233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.558244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.558453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.558465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.558635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.558646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 
00:39:15.723 [2024-07-22 19:43:34.558992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.559002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.559364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.559376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.559729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.559740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.559939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.559950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.560178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.560188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.560404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.723 [2024-07-22 19:43:34.560415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.723 qpair failed and we were unable to recover it. 00:39:15.723 [2024-07-22 19:43:34.560782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.724 [2024-07-22 19:43:34.560792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.724 qpair failed and we were unable to recover it. 00:39:15.724 [2024-07-22 19:43:34.561145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.724 [2024-07-22 19:43:34.561155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.724 qpair failed and we were unable to recover it. 00:39:15.724 [2024-07-22 19:43:34.561565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.724 [2024-07-22 19:43:34.561577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.724 qpair failed and we were unable to recover it. 00:39:15.724 [2024-07-22 19:43:34.561927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.724 [2024-07-22 19:43:34.561938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.724 qpair failed and we were unable to recover it. 
00:39:15.724 [2024-07-22 19:43:34.562290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.724 [2024-07-22 19:43:34.562300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.724 qpair failed and we were unable to recover it. 00:39:15.724 [2024-07-22 19:43:34.562671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.724 [2024-07-22 19:43:34.562682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.724 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.563053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.563063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.563413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.563424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.563778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.563789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.564132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.564142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.564494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.564506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.564856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.564866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.565218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.565229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.565591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.565602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 
00:39:15.725 [2024-07-22 19:43:34.565974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.565985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.725 [2024-07-22 19:43:34.566337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.725 [2024-07-22 19:43:34.566348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.725 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.566573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.566584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.566806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.566816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.567169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.567180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.567532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.567542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.567894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.567906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.568130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.568141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.568501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.568512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.568869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.568882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 
00:39:15.726 [2024-07-22 19:43:34.569101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.569112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.569291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.569303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.569687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.569698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.569944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.569955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.570310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.570321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.570670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.570680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.571056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.571067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.571419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.571430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.571784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.571794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.572102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.572113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 
00:39:15.726 [2024-07-22 19:43:34.572385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.572395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.572785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.572795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.573145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.573155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.726 qpair failed and we were unable to recover it. 00:39:15.726 [2024-07-22 19:43:34.573500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.726 [2024-07-22 19:43:34.573511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.573884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.573895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.574246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.574257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.574678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.574692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.575046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.575056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.575370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.575380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.575591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.575601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 
00:39:15.727 [2024-07-22 19:43:34.575938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.575949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.576292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.576304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.576656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.576666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.577042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.577053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.577411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.577422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.577775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.577786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.578143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.578154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.578496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.578507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.578860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.578871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.579233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.579244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 
00:39:15.727 [2024-07-22 19:43:34.579599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.579610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.579982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.579993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.580216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.580226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.580584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.580594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.580947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.727 [2024-07-22 19:43:34.580957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.727 qpair failed and we were unable to recover it. 00:39:15.727 [2024-07-22 19:43:34.581327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.581338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.581683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.581694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.582049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.582060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.582497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.582508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.582884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.582896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 
00:39:15.728 [2024-07-22 19:43:34.583242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.583259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.583622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.583634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.583984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.583994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.584347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.584358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.584705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.584715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.584944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.584954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.728 [2024-07-22 19:43:34.585299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.728 [2024-07-22 19:43:34.585310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.728 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.585632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.585643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.585995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.586005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.586358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.586368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 
00:39:15.729 [2024-07-22 19:43:34.586683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.586693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.587071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.587082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.587432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.587443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.587800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.587812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.588165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.588176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.729 qpair failed and we were unable to recover it. 00:39:15.729 [2024-07-22 19:43:34.588548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.729 [2024-07-22 19:43:34.588559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.588786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.588795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.589151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.589162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.589383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.589394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.589613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.589624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 
00:39:15.730 [2024-07-22 19:43:34.589937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.589948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.590298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.590310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.590686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.590697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.591071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.591082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.591301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.591312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.730 qpair failed and we were unable to recover it. 00:39:15.730 [2024-07-22 19:43:34.591683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.730 [2024-07-22 19:43:34.591693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.592044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.592057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.592435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.592446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.592876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.592887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.593231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.593242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 
00:39:15.731 [2024-07-22 19:43:34.593465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.593476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.593852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.593863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.594291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.594301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.594545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.594555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.731 [2024-07-22 19:43:34.594879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.731 [2024-07-22 19:43:34.594889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.731 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.595263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.595274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.595649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.595659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.595857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.595868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.596114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.596124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.596473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.596485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 
00:39:15.732 [2024-07-22 19:43:34.596686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.596701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.597072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.597083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.597457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.597468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.597841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.597851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.598046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.598057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.598764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.598786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.599147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.599158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.599534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.599545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.599896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.599907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 00:39:15.732 [2024-07-22 19:43:34.600263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.600273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.732 qpair failed and we were unable to recover it. 
00:39:15.732 [2024-07-22 19:43:34.600627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.732 [2024-07-22 19:43:34.600638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.601016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.601026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.601381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.601392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.601768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.601778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.601979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.601991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.602370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.602381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.602565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.602576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.602928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.602938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.603338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.603349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.603687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.603698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 
00:39:15.733 [2024-07-22 19:43:34.604072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.604082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.604406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.604417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.604773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.604783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.605167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.605177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.605392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.605403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.605681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.605692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.606017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.606030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.606373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.606384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.606628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.606639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.606998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.607008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 
00:39:15.733 [2024-07-22 19:43:34.607394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.607404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.607783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.607794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.608174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.608184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.608393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.608404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.608760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.608772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.609155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.609165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.609533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.609544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.609912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.609923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.610323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.610334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.610598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.610608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 
00:39:15.733 [2024-07-22 19:43:34.610963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.610973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.611292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.611303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.611629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.611639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.611990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.612000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.612358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.612370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.612678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.612689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.613051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.613061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.613424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.613435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.613779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.613790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.614090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.614101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 
00:39:15.733 [2024-07-22 19:43:34.614460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.614472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.614849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.614859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.615103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.615113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.615443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.615454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.615782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.615793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.616152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.616161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.616508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.616519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.616880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.616890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.617288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.617299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 00:39:15.733 [2024-07-22 19:43:34.617671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.733 [2024-07-22 19:43:34.617682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.733 qpair failed and we were unable to recover it. 
00:39:15.733 [2024-07-22 19:43:34.618046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.618057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.618408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.618419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.618687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.618697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.619047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.619057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.619392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.619407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.619759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.619769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.620123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.620135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.620492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.734 [2024-07-22 19:43:34.620504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.734 qpair failed and we were unable to recover it. 00:39:15.734 [2024-07-22 19:43:34.620844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.735 [2024-07-22 19:43:34.620854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.735 qpair failed and we were unable to recover it. 00:39:15.735 [2024-07-22 19:43:34.621277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.735 [2024-07-22 19:43:34.621287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.735 qpair failed and we were unable to recover it. 
00:39:15.735 [2024-07-22 19:43:34.621637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.735 [2024-07-22 19:43:34.621648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.621961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.621970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.622289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.622300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.622667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.622682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.623037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.623047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.623381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.623392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.623748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.623758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.624110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.624121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.624478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.624489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.624708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.624719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 
00:39:15.736 [2024-07-22 19:43:34.625082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.625093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.625451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.625462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.625813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.625824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.626204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.626216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.626560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.626572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.626800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.626812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.627181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.627193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.627366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.627379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.627704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.627715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.628015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.628027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 
00:39:15.736 [2024-07-22 19:43:34.628385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.628398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.628780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.628791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.629145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.629156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.629404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.629416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.629767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.629778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.630035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.630046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.630474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.630485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.630835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.630846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.631112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.631123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 00:39:15.736 [2024-07-22 19:43:34.631455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:15.736 [2024-07-22 19:43:34.631467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:15.736 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnection attempt logged between 19:43:34.631 and 19:43:34.697, elapsed timestamps 00:39:15.736 through 00:39:16.018 ...]
00:39:16.018 [2024-07-22 19:43:34.697855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.697865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.018 qpair failed and we were unable to recover it. 00:39:16.018 [2024-07-22 19:43:34.698224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.698233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.018 qpair failed and we were unable to recover it. 00:39:16.018 [2024-07-22 19:43:34.698586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.698595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.018 qpair failed and we were unable to recover it. 00:39:16.018 [2024-07-22 19:43:34.698982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.698992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.018 qpair failed and we were unable to recover it. 00:39:16.018 [2024-07-22 19:43:34.699356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.699366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.018 qpair failed and we were unable to recover it. 00:39:16.018 [2024-07-22 19:43:34.699722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.699731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.018 qpair failed and we were unable to recover it. 00:39:16.018 [2024-07-22 19:43:34.700101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.700110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.018 qpair failed and we were unable to recover it. 00:39:16.018 [2024-07-22 19:43:34.700491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.018 [2024-07-22 19:43:34.700501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.700834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.700843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.701154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.701166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 
00:39:16.019 [2024-07-22 19:43:34.701574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.701584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.701970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.701979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.702344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.702354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.702662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.702671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.703013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.703022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.703380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.703390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.703751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.703762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.704107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.704116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.704457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.704467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.704826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.704835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 
00:39:16.019 [2024-07-22 19:43:34.705165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.705174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.705530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.705539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.705813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.705822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.706176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.706186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.706462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.706471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.706911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.706920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.707267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.707276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.707506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.707516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.707876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.707887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.708277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.708287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 
00:39:16.019 [2024-07-22 19:43:34.708657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.708670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.709027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.709036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.709367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.709377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.709731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.709740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.710073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.710082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.710414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.710424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.710779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.710788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.711122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.711131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.711489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.711498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.711835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.711844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 
00:39:16.019 [2024-07-22 19:43:34.712195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.712208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.712552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.712562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.712998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.713008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.713339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.713349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.713698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.019 [2024-07-22 19:43:34.713707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.019 qpair failed and we were unable to recover it. 00:39:16.019 [2024-07-22 19:43:34.714043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.714052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.714403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.714412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.714767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.714776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.715109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.715118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.715569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.715578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 
00:39:16.020 [2024-07-22 19:43:34.715909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.715918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.716310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.716326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.716694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.716704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.717041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.717050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.717434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.717443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.717796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.717806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.717987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.717999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.718321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.718331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.718713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.718722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.719052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.719061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 
00:39:16.020 [2024-07-22 19:43:34.719413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.719424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.719640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.719650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.719974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.719984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.720395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.720404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.720768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.720783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.721135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.721144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.721486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.721496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.721864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.721874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.722227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.722237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.722679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.722688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 
00:39:16.020 [2024-07-22 19:43:34.723030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.723039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.723387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.723397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.723772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.723783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.724210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.724220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.724541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.724558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.724910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.724919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.725250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.725260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.725601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.725610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.725943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.725952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.726293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.726303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 
00:39:16.020 [2024-07-22 19:43:34.726699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.726708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.727038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.727047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.727399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.020 [2024-07-22 19:43:34.727408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.020 qpair failed and we were unable to recover it. 00:39:16.020 [2024-07-22 19:43:34.727770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.727779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.728161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.728170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.728547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.728558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.728934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.728944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.729296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.729306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.729529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.729538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.729892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.729901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 
00:39:16.021 [2024-07-22 19:43:34.730120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.730130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.730485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.730495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.730824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.730833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.731187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.731196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.731495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.731508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.731872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.731881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.732217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.732231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.732599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.732608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.732977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.732987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.733312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.733322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 
00:39:16.021 [2024-07-22 19:43:34.733675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.733684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.734020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.734029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.734381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.734390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.734747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.734756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.735137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.735146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.735500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.735510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.735881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.735891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.736225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.736234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.736599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.736608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.736962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.736971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 
00:39:16.021 [2024-07-22 19:43:34.737375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.737384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.737715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.737726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.737944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.737953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.738335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.738345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.738723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.738732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.739132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.739141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.739481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.739491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.739743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.739752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.740095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.740104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 00:39:16.021 [2024-07-22 19:43:34.740464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.021 [2024-07-22 19:43:34.740474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.021 qpair failed and we were unable to recover it. 
00:39:16.021 [2024-07-22 19:43:34.740838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.740847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.741204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.741213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.741572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.741581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.741976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.741986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.742354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.742363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.742769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.742779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.743151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.743160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.743506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.743515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.743861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.743871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.744237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.744247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 
00:39:16.022 [2024-07-22 19:43:34.744618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.744627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.744982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.744991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.745343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.745352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.745691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.745699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.746050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.746059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.746345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.746354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.746714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.746727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.746945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.746955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.747317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.747326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.747692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.747701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 
00:39:16.022 [2024-07-22 19:43:34.748073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.748082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.748391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.748400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.748732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.748742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.749076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.749085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.749266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.749276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.749585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.749594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.749893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.749902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.750280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.750289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.750496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.750505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.750831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.750841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 
00:39:16.022 [2024-07-22 19:43:34.751214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.751223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.751579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.751588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.022 qpair failed and we were unable to recover it. 00:39:16.022 [2024-07-22 19:43:34.751919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.022 [2024-07-22 19:43:34.751928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.752294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.752303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.752669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.752678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.752934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.752944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.753299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.753309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.753646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.753655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.754006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.754018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.754448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.754458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 
00:39:16.023 [2024-07-22 19:43:34.754787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.754796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.755104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.755113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.755537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.755546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.755886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.755895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.756273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.756282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.756629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.756638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.757053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.757062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.757397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.757407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.757759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.757769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.758099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.758109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 
00:39:16.023 [2024-07-22 19:43:34.758439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.758449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.758805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.758814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.759147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.759156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.759519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.759534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.759785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.759795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.760122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.760131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.760462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.760471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.760668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.760679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.761005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.761014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.761420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.761429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 
00:39:16.023 [2024-07-22 19:43:34.761767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.761776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.762134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.762143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.762470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.762480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.762688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.762698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.763026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.763035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.763374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.763384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.763574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.763584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.763919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.763928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.764282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.023 [2024-07-22 19:43:34.764292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.023 qpair failed and we were unable to recover it. 00:39:16.023 [2024-07-22 19:43:34.764501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.764511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 
00:39:16.024 [2024-07-22 19:43:34.764680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.764689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.765008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.765018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.765350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.765360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.765712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.765721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.766061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.766071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.766414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.766424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.766642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.766653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.766990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.766999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.767354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.767363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.767719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.767729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 
00:39:16.024 [2024-07-22 19:43:34.768061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.768070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.768404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.768414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.768793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.768803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.769176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.769189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.769565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.769575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.769929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.769939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.770283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.770293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.770656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.770666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.771025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.771034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.771369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.771379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 
00:39:16.024 [2024-07-22 19:43:34.771752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.771761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.772094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.772103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.772478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.772488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.772828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.772837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.773042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.773051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.773409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.773419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.773773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.773783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.774112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.774122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.774465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.774475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.774829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.774838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 
00:39:16.024 [2024-07-22 19:43:34.775161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.775170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.775556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.775566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.775944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.775969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.776304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.776313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.776528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.776537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.776928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.776938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.024 [2024-07-22 19:43:34.777146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.024 [2024-07-22 19:43:34.777155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.024 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.777515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.777525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.777768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.777778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.778090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.778099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 
00:39:16.025 [2024-07-22 19:43:34.778482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.778493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.778847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.778857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.779117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.779125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.779476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.779485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.779724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.779734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.779984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.779994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.780387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.780397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.780727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.780735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.781056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.781065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.781407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.781417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 
00:39:16.025 [2024-07-22 19:43:34.781780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.781790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.782137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.782146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.782496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.782506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.782845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.782857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.783218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.783228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.783586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.783595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.783872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.783881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.784236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.784245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.784580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.784598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.784951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.784960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 
00:39:16.025 [2024-07-22 19:43:34.785295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.785305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.785670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.785680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.786019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.786028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.786359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.786369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.786716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.786726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.787100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.787109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.787463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.787473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.787867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.787876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.788153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.788163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.788530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.788540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 
00:39:16.025 [2024-07-22 19:43:34.788788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.788798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.789234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.789243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.789629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.789638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.789984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.789993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.025 qpair failed and we were unable to recover it. 00:39:16.025 [2024-07-22 19:43:34.790357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.025 [2024-07-22 19:43:34.790368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.790821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.790830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.791179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.791188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.791573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.791583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.791781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.791792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.792031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.792042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 
00:39:16.026 [2024-07-22 19:43:34.792237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.792247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.792629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.792638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.792974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.792983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.793363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.793373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.793595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.793604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.793968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.793977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.794313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.794323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.794659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.794668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.795034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.795043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.795387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.795397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 
00:39:16.026 [2024-07-22 19:43:34.795616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.795625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.795955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.795965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.796160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.796169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.796585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.796596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.796925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.796934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.797298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.797308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.797685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.797698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.797958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.797967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.798316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.798326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.798639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.798648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 
00:39:16.026 [2024-07-22 19:43:34.799013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.799022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.799216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.799226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.799453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.799463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.799824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.799833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.800077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.800087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.800464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.800474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.800842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.800852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.801209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.801219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.801395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.801403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.801762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.801772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 
00:39:16.026 [2024-07-22 19:43:34.801997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.802006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.802360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.802371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.026 qpair failed and we were unable to recover it. 00:39:16.026 [2024-07-22 19:43:34.802732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.026 [2024-07-22 19:43:34.802741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 00:39:16.027 [2024-07-22 19:43:34.803079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.027 [2024-07-22 19:43:34.803088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 00:39:16.027 [2024-07-22 19:43:34.803419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.027 [2024-07-22 19:43:34.803428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 00:39:16.027 [2024-07-22 19:43:34.803783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.027 [2024-07-22 19:43:34.803792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 00:39:16.027 [2024-07-22 19:43:34.804189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.027 [2024-07-22 19:43:34.804203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 00:39:16.027 [2024-07-22 19:43:34.804579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.027 [2024-07-22 19:43:34.804589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 00:39:16.027 [2024-07-22 19:43:34.804936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.027 [2024-07-22 19:43:34.804945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 00:39:16.027 [2024-07-22 19:43:34.805275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.027 [2024-07-22 19:43:34.805285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.027 qpair failed and we were unable to recover it. 
00:39:16.027 [2024-07-22 19:43:34.805634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.027 [2024-07-22 19:43:34.805643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.027 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 19:43:34.805 through 19:43:34.877; only the first and last occurrences are kept here ...]
00:39:16.033 [2024-07-22 19:43:34.877191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.033 [2024-07-22 19:43:34.877204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.033 qpair failed and we were unable to recover it.
00:39:16.033 [2024-07-22 19:43:34.877570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.877579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.877955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.877964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.878278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.878287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.878635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.878646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.878977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.878986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.879349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.879360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.879733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.879742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.880119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.880129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.880479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.880489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.880770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.880779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 
00:39:16.033 [2024-07-22 19:43:34.881134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.881143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.881525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.881535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.881869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.881878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.882246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.882256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.882635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.882644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.882851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.882861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.883218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.883228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.883594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.883604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.883958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.883968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.884346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.884356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 
00:39:16.033 [2024-07-22 19:43:34.884694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.884703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.885068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.885077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.885417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.885430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.885801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.885810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.886142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.886152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.886482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.886492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.886888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.886897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.887104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.887113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.887482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.887492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.887848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.887858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 
00:39:16.033 [2024-07-22 19:43:34.888215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.888225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.888548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.888558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.888748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.888758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.889044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.889053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.033 [2024-07-22 19:43:34.889272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.033 [2024-07-22 19:43:34.889282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.033 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.889651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.889660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.889989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.889998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.890335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.890344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.890715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.890724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.891062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.891072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 
00:39:16.034 [2024-07-22 19:43:34.891304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.891314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.891689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.891698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.892030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.892040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.892419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.892431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.892763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.892772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.893165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.893175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.893541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.893551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.893881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.893889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.894249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.894258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.894505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.894515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 
00:39:16.034 [2024-07-22 19:43:34.894851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.894861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.895221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.895231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.895664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.895673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.896009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.896018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.896369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.896378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.896737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.896746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.897076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.897085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.897418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.897428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.897782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.897791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.898168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.898178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 
00:39:16.034 [2024-07-22 19:43:34.898547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.898557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.899012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.899021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.899461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.899471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.899818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.899828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.900257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.900267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.900583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.900592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.900956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.900964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.901335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.901345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.901703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.901713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.902080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.902089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 
00:39:16.034 [2024-07-22 19:43:34.902432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.902442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.902801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.902811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.034 [2024-07-22 19:43:34.903163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.034 [2024-07-22 19:43:34.903172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.034 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.903432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.903441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.903635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.903646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.903976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.903986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.904341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.904350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.904697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.904707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.904900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.904910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.905281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.905290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 
00:39:16.035 [2024-07-22 19:43:34.905647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.905656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.905837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.905846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.906249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.906258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.906617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.906628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.906804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.906814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.907079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.907088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.907440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.907453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.907827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.907836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.908166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.908175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.908518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.908527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 
00:39:16.035 [2024-07-22 19:43:34.908780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.908789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.909166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.909176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.909521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.909530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.909901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.909911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.910325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.910335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.910701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.910711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.911070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.911079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.911415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.911425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.911777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.911786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.912118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.912127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 
00:39:16.035 [2024-07-22 19:43:34.912493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.912503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.912898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.912907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.913271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.913280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.913621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.913630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.913981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.913990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.914347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.914357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.914720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.914730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.915108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.915117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.915312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.915321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 00:39:16.035 [2024-07-22 19:43:34.915711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.035 [2024-07-22 19:43:34.915720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.035 qpair failed and we were unable to recover it. 
00:39:16.035 [2024-07-22 19:43:34.916048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.916058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.916438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.916448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.916780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.916789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.917153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.917162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.917527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.917536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.917866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.917877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.918236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.918246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.918434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.918443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.918780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.918789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.919161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.919170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 
00:39:16.036 [2024-07-22 19:43:34.919567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.919577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.919774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.919783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.920103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.920112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.920349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.920360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.920695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.920705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.921059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.921068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.921316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.921325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.921672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.921682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.922012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.922022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 00:39:16.036 [2024-07-22 19:43:34.922394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.036 [2024-07-22 19:43:34.922404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.036 qpair failed and we were unable to recover it. 
00:39:16.036 [2024-07-22 19:43:34.922751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.036 [2024-07-22 19:43:34.922760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.036 qpair failed and we were unable to recover it.
00:39:16.036 [2024-07-22 19:43:34.923098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.036 [2024-07-22 19:43:34.923108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.036 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every retry from 19:43:34.923 through 19:43:34.996 (console timestamps 00:39:16.036 to 00:39:16.317): connect() toward addr=10.0.0.2, port=4420 fails with errno = 111 and tqpair=0x6150003a0000 cannot be recovered ...]
00:39:16.317 [2024-07-22 19:43:34.996656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.996665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.996994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.997003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.997351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.997364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.997721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.997730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.997993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.998002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.998340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.998349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.998722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.998732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.999104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.999113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.999548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.999558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:34.999746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:34.999755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 
00:39:16.317 [2024-07-22 19:43:35.000013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.000024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.000364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.000373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.000572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.000581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.000836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.000846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.001064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.001074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.001310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.001319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.001707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.001716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.002049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.002058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.002413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.002422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.002735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.002744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 
00:39:16.317 [2024-07-22 19:43:35.003114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.003123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.003530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.003540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.003746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.003755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.004102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.004111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.004468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.317 [2024-07-22 19:43:35.004477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.317 qpair failed and we were unable to recover it. 00:39:16.317 [2024-07-22 19:43:35.004798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.004807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.005162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.005170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.005503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.005512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.005865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.005875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.006231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.006241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 
00:39:16.318 [2024-07-22 19:43:35.006601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.006611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.006982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.006991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.007249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.007258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.007577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.007586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.007935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.007945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.008320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.008330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.008711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.008720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.009077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.009093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.009348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.009357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.009813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.009822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 
00:39:16.318 [2024-07-22 19:43:35.010149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.010158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.010522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.010531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.010887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.010898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.011082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.011092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.011445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.011454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.011803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.011814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.012154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.012164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.012507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.012516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.012866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.012883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.013311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.013321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 
00:39:16.318 [2024-07-22 19:43:35.013681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.013694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.014070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.014079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.014418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.014428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.014624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.014633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.015018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.318 [2024-07-22 19:43:35.015028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.318 qpair failed and we were unable to recover it. 00:39:16.318 [2024-07-22 19:43:35.015290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.015300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.015674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.015683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.016021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.016030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.016379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.016389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.016762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.016771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 
00:39:16.319 [2024-07-22 19:43:35.017104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.017113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.017446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.017455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.017811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.017820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.018151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.018159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.018514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.018523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.018879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.018889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.019310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.019323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.019648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.019657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.020011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.020020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.020350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.020360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 
00:39:16.319 [2024-07-22 19:43:35.020722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.020731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.021064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.021073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.021416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.021426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.021615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.021624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.021947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.021956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.022332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.022342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.022688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.022698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.022921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.022931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.023296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.023306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.023498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.023509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 
00:39:16.319 [2024-07-22 19:43:35.023874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.023883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.024289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.024299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.024638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.024647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.025002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.025011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.025364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.025374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.025538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.025548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.025956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.025966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.026380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.026391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.026689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.026699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.027051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.027061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 
00:39:16.319 [2024-07-22 19:43:35.027341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.027353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.027703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.027713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.028087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.319 [2024-07-22 19:43:35.028096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.319 qpair failed and we were unable to recover it. 00:39:16.319 [2024-07-22 19:43:35.028446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.028455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.028814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.028823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.029214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.029224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.029588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.029597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.029910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.029920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.030275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.030285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.030663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.030673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 
00:39:16.320 [2024-07-22 19:43:35.031068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.031077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.031413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.031423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.031776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.031785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.032185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.032194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.032605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.032615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.032965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.032974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.033425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.033435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.033789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.033798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.034150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.034159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.034505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.034515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 
00:39:16.320 [2024-07-22 19:43:35.034891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.034901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.035284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.035294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.035628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.035637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.035834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.035843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.036183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.036192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.036552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.036561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.036907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.036917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.037271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.037280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.037642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.037651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.038058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.038067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 
00:39:16.320 [2024-07-22 19:43:35.038416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.038425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.038646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.038655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.038974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.038984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.039172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.039181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.039543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.039553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.039890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.039899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.040141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.040151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.040345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.040354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.040733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.040742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.320 [2024-07-22 19:43:35.041135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.041144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 
00:39:16.320 [2024-07-22 19:43:35.041498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.320 [2024-07-22 19:43:35.041513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.320 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.041713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.041723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.042077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.042086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.042406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.042416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.042775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.042785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.043116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.043125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.043337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.043347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.043745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.043755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.043949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.043958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.044328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.044337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 
00:39:16.321 [2024-07-22 19:43:35.044669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.044678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.045033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.045042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.045401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.045410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.045787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.045795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.046125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.046136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.046491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.046501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.046805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.046815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.047189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.047198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.047541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.047550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.047901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.047918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 
00:39:16.321 [2024-07-22 19:43:35.048272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.048282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.048621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.048630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.048981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.048990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.049324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.049334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.049690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.049699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.049879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.049889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.050155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.050163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.050521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.050531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.050933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.050943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.051139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.051150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 
00:39:16.321 [2024-07-22 19:43:35.051502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.051513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.051859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.051869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.052216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.052225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.052577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.052586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.052915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.052926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.053281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.053291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.053619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.053637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.054015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.054024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.321 qpair failed and we were unable to recover it. 00:39:16.321 [2024-07-22 19:43:35.054357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.321 [2024-07-22 19:43:35.054366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.054720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.054729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 
00:39:16.322 [2024-07-22 19:43:35.055066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.055091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.055470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.055481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.055839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.055848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.056110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.056119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.056526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.056536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.056747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.056756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.057111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.057120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.057459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.057468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.057718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.057727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.058105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.058114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 
00:39:16.322 [2024-07-22 19:43:35.058437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.058447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.058844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.058853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.059223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.059233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.059608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.059618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.059950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.059959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.060312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.060321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.060690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.060699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.061077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.061085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.061426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.061436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.061835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.061845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 
00:39:16.322 [2024-07-22 19:43:35.062191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.062214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.062535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.062544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.062897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.062906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.063311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.063321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.063750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.063762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.064100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.064109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.064436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.064445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.064665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.064675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.065056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.065066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.065272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.065283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 
00:39:16.322 [2024-07-22 19:43:35.065619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.065629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.065982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.065993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.066360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.066369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.066612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.066622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.066974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.066982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.067320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.322 [2024-07-22 19:43:35.067331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.322 qpair failed and we were unable to recover it. 00:39:16.322 [2024-07-22 19:43:35.067695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.067704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.068035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.068045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.068398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.068408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.068748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.068758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 
00:39:16.323 [2024-07-22 19:43:35.069078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.069090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.069229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.069239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.069666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.069675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.070048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.070058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.070400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.070410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.070785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.070795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.071122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.071131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.071499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.071509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.071861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.071870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.072079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.072088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 
00:39:16.323 [2024-07-22 19:43:35.072343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.072352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.072706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.072715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.072905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.072914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.073255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.073265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.073600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.073609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.073966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.073976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.074315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.074325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.074682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.074692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.075044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.075054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.075382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.075392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 
00:39:16.323 [2024-07-22 19:43:35.075648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.075657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.076031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.076041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.076399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.076408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.076777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.076786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.077118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.077127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.077472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.077481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.077701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.077710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.323 [2024-07-22 19:43:35.078058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.323 [2024-07-22 19:43:35.078068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.323 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.078482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.078492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.078734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.078743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 
00:39:16.324 [2024-07-22 19:43:35.079105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.079114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.079462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.079471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.079848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.079865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.080236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.080246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.080560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.080569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.080807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.080816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.081165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.081175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.081555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.081564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.081898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.081907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.082263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.082272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 
00:39:16.324 [2024-07-22 19:43:35.082629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.082640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.082871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.082881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.083226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.083236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.083601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.083610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.083958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.083968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.084356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.084365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.084748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.084757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.085087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.085096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.085422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.085435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.085699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.085708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 
00:39:16.324 [2024-07-22 19:43:35.086058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.086067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.086413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.086422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.086783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.086793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.087148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.087158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.087519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.087529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.087882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.087891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.088246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.088256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.088605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.088615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.088795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.088805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.089176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.089185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 
00:39:16.324 [2024-07-22 19:43:35.089518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.089527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.089760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.089769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.090120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.090128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.090316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.090326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.090696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.090705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.324 qpair failed and we were unable to recover it. 00:39:16.324 [2024-07-22 19:43:35.091038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.324 [2024-07-22 19:43:35.091048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.091370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.091380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.091712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.091721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.092038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.092047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.092268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.092279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 
00:39:16.325 [2024-07-22 19:43:35.092639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.092648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.092981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.092991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.093341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.093351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.093702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.093712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.094066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.094075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.094410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.094420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.094782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.094791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.095130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.095139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.095471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.095481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.095829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.095838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 
00:39:16.325 [2024-07-22 19:43:35.096182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.096193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.096581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.096591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.096937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.096946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.097280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.097291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.097718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.097727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.098076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.098085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.098423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.098432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.098787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.098798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.099134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.099142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.099477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.099486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 
00:39:16.325 [2024-07-22 19:43:35.099842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.099851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.100196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.100213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.100571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.100580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.100913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.100921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.101325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.101335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.101536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.101546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.101917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.101927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.102309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.102318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.102763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.102772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.325 [2024-07-22 19:43:35.103138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.103155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 
00:39:16.325 [2024-07-22 19:43:35.103521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.325 [2024-07-22 19:43:35.103531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.325 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.103860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.103870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.104233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.104242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.104677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.104687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.105021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.105030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.105452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.105462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.105814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.105823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.106228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.106237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.106622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.106631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.106983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.106992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 
00:39:16.326 [2024-07-22 19:43:35.107359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.107369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.107706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.107715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.107922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.107935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.108284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.108295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.108663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.108672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.109017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.109026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.109373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.109383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.109715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.109725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.109980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.109989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.110171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.110181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 
00:39:16.326 [2024-07-22 19:43:35.110540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.110551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.110923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.110933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.111293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.111302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.111637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.111646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.112014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.112023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.112399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.112408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.112739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.112748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.113108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.113117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.113489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.113498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.113871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.113880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 
00:39:16.326 [2024-07-22 19:43:35.114212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.114221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.114556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.114565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.114919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.114928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.115292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.115302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.115664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.115673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.326 [2024-07-22 19:43:35.116006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.326 [2024-07-22 19:43:35.116015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.326 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.116378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.116388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.116764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.116773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.117118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.117127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.117545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.117555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 
00:39:16.327 [2024-07-22 19:43:35.117917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.117926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.118266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.118276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.118651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.118660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.119032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.119042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.119443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.119453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.119786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.119795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.120150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.120159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.120409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.120420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.120780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.120789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.121123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.121132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 
00:39:16.327 [2024-07-22 19:43:35.121492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.121501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.121864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.121872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.122063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.122072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.122481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.122491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.122723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.122732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.123086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.123095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.123284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.123293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.123624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.123633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.123987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.123996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.124357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.124367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 
00:39:16.327 [2024-07-22 19:43:35.124717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.124727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.124941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.124950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.125268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.125277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.125635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.125644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.125988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.125997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.126182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.126190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.126543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.327 [2024-07-22 19:43:35.126552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.327 qpair failed and we were unable to recover it. 00:39:16.327 [2024-07-22 19:43:35.126919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.126929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.127187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.127197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.127550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.127560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 
00:39:16.328 [2024-07-22 19:43:35.127929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.127939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.128293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.128302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.128655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.128664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.129036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.129045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.129404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.129414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.129752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.129761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.130115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.130138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.130490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.130500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.130863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.130872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.131276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.131286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 
00:39:16.328 [2024-07-22 19:43:35.131613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.131623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.131962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.131971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.132302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.132312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.132686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.132695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.133028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.133037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.133216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.133227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.133591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.133600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.133950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.133961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.134319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.134329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.134719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.134728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 
00:39:16.328 [2024-07-22 19:43:35.135118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.135127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.135459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.135468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.135827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.135837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.136211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.136222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.136586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.136595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.136974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.136983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.137180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.137190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.137593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.137603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.137937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.137947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.328 qpair failed and we were unable to recover it. 00:39:16.328 [2024-07-22 19:43:35.138312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.328 [2024-07-22 19:43:35.138322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 
00:39:16.329 [2024-07-22 19:43:35.138686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.138695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.138879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.138890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.139145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.139154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.139323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.139334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.139676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.139686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.140057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.140066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.140262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.140272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.140619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.140629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.140968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.140977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.141336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.141346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 
00:39:16.329 [2024-07-22 19:43:35.141595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.141604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.141976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.141985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.142238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.142247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.142581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.142590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.142921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.142931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.143302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.143316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.143683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.143692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.144074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.144083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.144461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.144472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.144698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.144708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 
00:39:16.329 [2024-07-22 19:43:35.145064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.145074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.145451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.145461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.145813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.145822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.146180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.146189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.146544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.146553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.146886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.146895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.147257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.147266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.147624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.147635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.147963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.147972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.148333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.148342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 
00:39:16.329 [2024-07-22 19:43:35.148608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.148617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.148970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.148978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.149312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.149321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.149572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.149582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.149826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.149835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.329 [2024-07-22 19:43:35.150153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.329 [2024-07-22 19:43:35.150162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.329 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.150419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.150428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.150785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.150794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.151083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.151093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.151470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.151480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 
00:39:16.330 [2024-07-22 19:43:35.151819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.151832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.152205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.152215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.152569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.152578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.152785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.152795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.153045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.153054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.153420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.153430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.153785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.153794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.154032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.154042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.154390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.154401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.154785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.154794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 
00:39:16.330 [2024-07-22 19:43:35.155123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.155133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.155462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.155471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.155804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.155812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.156161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.156170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.156518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.156528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.156858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.156867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.157173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.157182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.157545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.157554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.157930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.157939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.158249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.158258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 
00:39:16.330 [2024-07-22 19:43:35.158620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.158630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.158970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.158979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.159344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.159354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.159719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.159728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.160083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.160093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.160307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.160317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.160520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.160529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.160856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.160868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.161300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.330 [2024-07-22 19:43:35.161310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.330 qpair failed and we were unable to recover it. 00:39:16.330 [2024-07-22 19:43:35.161631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.331 [2024-07-22 19:43:35.161640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.331 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 2024-07-22 19:43:35.161859 through 2024-07-22 19:43:35.226853 ...]
00:39:16.336 [2024-07-22 19:43:35.227111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.227120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.227487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.227497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.227854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.227864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.228084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.228093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.228436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.228446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.228795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.228804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.229163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.229172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.229506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.229517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.229889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.229899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.230119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.230129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 
00:39:16.336 [2024-07-22 19:43:35.230495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.230505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.230739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.230748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.231106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.231116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.231465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.231475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.231718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.231728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.232084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.232093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.232470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.232480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.232865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.232875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.233229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.233239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.233595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.233605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 
00:39:16.336 [2024-07-22 19:43:35.233944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.233953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.234300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.234309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.234672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.234680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.234978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.234988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.336 [2024-07-22 19:43:35.235343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.336 [2024-07-22 19:43:35.235353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.336 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.235715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.235725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.236076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.236085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.236418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.236427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.236783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.236792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.237169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.237178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 
00:39:16.337 [2024-07-22 19:43:35.237525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.237534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.237891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.237900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.238255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.238265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.238616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.238625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.238973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.238982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.239334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.239344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.239605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.239618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.239995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.240005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.240226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.240236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.240493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.240502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 
00:39:16.337 [2024-07-22 19:43:35.240841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.240850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.241187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.241196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.241458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.241467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.241797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.241807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.242170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.242179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.242531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.242541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.242887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.242896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.243249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.243260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.243602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.243611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.243793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.243803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 
00:39:16.337 [2024-07-22 19:43:35.244138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.244147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.244585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.244595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.244979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.244988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.245356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.245365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.245606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.245616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.246041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.246051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.337 [2024-07-22 19:43:35.246388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.337 [2024-07-22 19:43:35.246398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.337 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.246599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.246609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.246948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.246957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.247211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.247221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 
00:39:16.338 [2024-07-22 19:43:35.247577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.247587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.247961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.247970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.248301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.248311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.248688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.248698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.249036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.249045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.249417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.249427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.249792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.249801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.250132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.250141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.250501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.250516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.250895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.250905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 
00:39:16.338 [2024-07-22 19:43:35.251237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.251247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.251501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.251510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.251918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.251927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.338 [2024-07-22 19:43:35.252274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.338 [2024-07-22 19:43:35.252283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.338 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.252649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.252659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.253020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.253030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.253293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.253303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.253634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.253643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.253971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.253980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.254336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.254346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 
00:39:16.611 [2024-07-22 19:43:35.254700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.254709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.255078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.255089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.255460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.255470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.255827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.255837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.256217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.256227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.256613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.256622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.256951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.256960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.257314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.257325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.257688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.257698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.258031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.258040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 
00:39:16.611 [2024-07-22 19:43:35.258392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.611 [2024-07-22 19:43:35.258402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.611 qpair failed and we were unable to recover it. 00:39:16.611 [2024-07-22 19:43:35.258760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.258769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.259099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.259108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.259456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.259465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.259827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.259836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.260048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.260057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.260418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.260428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.260804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.260813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.261219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.261229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.261667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.261677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 
00:39:16.612 [2024-07-22 19:43:35.261860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.261875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.262159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.262168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.262357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.262367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.262726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.262735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.263066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.263075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.263415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.263425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.263800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.263810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.264185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.264195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.264527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.264537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.264889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.264898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 
00:39:16.612 [2024-07-22 19:43:35.265267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.265276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.265629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.265638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.265955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.265964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.266337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.266347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.266696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.266706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.267062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.267080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.267499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.267509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.267841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.267850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.268207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.268216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.268548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.268557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 
00:39:16.612 [2024-07-22 19:43:35.268799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.268808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.269152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.269161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.269508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.269517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.269889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.269898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.270250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.270260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.270717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.270727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.270916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.270926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.271168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.271179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.612 [2024-07-22 19:43:35.271392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.612 [2024-07-22 19:43:35.271402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.612 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.271662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.271671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 
00:39:16.613 [2024-07-22 19:43:35.272024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.272033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.272291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.272300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.272647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.272656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.272987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.272996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.273248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.273257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.273453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.273463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.273829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.273838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.274232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.274242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.274523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.274532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 00:39:16.613 [2024-07-22 19:43:35.274860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.613 [2024-07-22 19:43:35.274869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.613 qpair failed and we were unable to recover it. 
00:39:16.613 [2024-07-22 19:43:35.275233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.613 [2024-07-22 19:43:35.275242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.613 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for every reconnect attempt from 19:43:35.275649 through 19:43:35.347225 ...]
00:39:16.619 [2024-07-22 19:43:35.347503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.619 [2024-07-22 19:43:35.347512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.619 qpair failed and we were unable to recover it.
00:39:16.619 [2024-07-22 19:43:35.347871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.347880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.348279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.348288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.348625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.348635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.348992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.349001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.349210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.349222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.349609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.349618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.349949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.349961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.350351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.350361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.350715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.350724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.351067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.351077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 
00:39:16.619 [2024-07-22 19:43:35.351348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.351358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.351564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.351573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.351947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.351956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.352330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.352339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.352708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.352718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.353068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.353078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.353459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.353468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.353838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.353848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.354235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.354245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.354623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.354632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 
00:39:16.619 [2024-07-22 19:43:35.354997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.355005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.355350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.355359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.355737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.355746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.356088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.356096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.356457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.356467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.356851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.356861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.357215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.357224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.357584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.357593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.357946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.619 [2024-07-22 19:43:35.357954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.619 qpair failed and we were unable to recover it. 00:39:16.619 [2024-07-22 19:43:35.358189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.358199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 
00:39:16.620 [2024-07-22 19:43:35.358257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.358268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.358594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.358603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.358933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.358943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.359296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.359306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.359639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.359648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.360022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.360031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.360387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.360396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.360740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.360750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.361091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.361101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.361475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.361484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 
00:39:16.620 [2024-07-22 19:43:35.361820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.361830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.362181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.362191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.362634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.362644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.362975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.362987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.363345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.363368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.363722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.363731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.364028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.364037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.364479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.364489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.364820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.364829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.365193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.365213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 
00:39:16.620 [2024-07-22 19:43:35.365570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.365579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.365908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.365917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.366276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.366286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.366640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.366649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.367028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.367037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.367398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.367408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.367766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.367775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.368109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.368119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.368471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.368481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.368903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.368912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 
00:39:16.620 [2024-07-22 19:43:35.369241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.369251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.620 qpair failed and we were unable to recover it. 00:39:16.620 [2024-07-22 19:43:35.369499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.620 [2024-07-22 19:43:35.369508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.369893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.369902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.370254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.370264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.370523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.370532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.370895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.370904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.371313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.371322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.371605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.371614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.371971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.371980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.372318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.372328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 
00:39:16.621 [2024-07-22 19:43:35.372703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.372715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.372773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.372784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.373112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.373121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.373481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.373491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.373822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.373831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.374019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.374029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.374407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.374417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.374771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.374780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.375135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.375144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.375487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.375497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 
00:39:16.621 [2024-07-22 19:43:35.375850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.375859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.376196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.376208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.376384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.376394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.376741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.376751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.376947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.376960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.377337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.377347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.377572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.377582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.377931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.377941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.378340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.378350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.378698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.378708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 
00:39:16.621 [2024-07-22 19:43:35.379061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.379070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.379399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.379409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.379763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.379772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.380102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.380111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.380459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.380469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.380848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.380857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.381186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.381195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.381539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.381549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.621 [2024-07-22 19:43:35.381901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.621 [2024-07-22 19:43:35.381910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.621 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.382243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.382253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 
00:39:16.622 [2024-07-22 19:43:35.382611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.382620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.382949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.382958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.383346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.383356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.383712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.383721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.384050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.384059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.384418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.384428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.384782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.384791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.385124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.385133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.385491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.385500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.385876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.385886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 
00:39:16.622 [2024-07-22 19:43:35.386237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.386247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.386608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.386617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.386867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.386876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.387186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.387195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.387575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.387585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.387962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.387971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.388318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.388328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.388677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.388687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.389008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.389017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.389372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.389382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 
00:39:16.622 [2024-07-22 19:43:35.389749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.389758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.390045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.390055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.390506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.390516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.390858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.390867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.391199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.391218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.391488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.391498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.391896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.391905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.392282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.392292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.392678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.392687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.392996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.393005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 
00:39:16.622 [2024-07-22 19:43:35.393358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.393369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.393736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.393745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.394073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.394082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.394450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.394460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.394790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.394802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.395149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.622 [2024-07-22 19:43:35.395158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.622 qpair failed and we were unable to recover it. 00:39:16.622 [2024-07-22 19:43:35.395552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.395562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.395890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.395899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.396261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.396271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.396658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.396667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 
00:39:16.623 [2024-07-22 19:43:35.397012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.397021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.397379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.397388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.397741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.397750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.398090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.398099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.398460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.398470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.398824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.398833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.399210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.399219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.399600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.399610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.399967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.399976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.400383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.400393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 
00:39:16.623 [2024-07-22 19:43:35.400590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.400600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.400986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.400995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.401329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.401338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.401693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.401701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.402034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.402044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.402404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.402413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.402762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.402772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.403126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.403135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.403476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.403485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.403837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.403846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 
00:39:16.623 [2024-07-22 19:43:35.404037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.404047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.404221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.404231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.404584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.404593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.404945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.404955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.405307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.405320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.405663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.405672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.406025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.406034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.406377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.406386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.406745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.406754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.407119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.407127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 
00:39:16.623 [2024-07-22 19:43:35.407487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.407497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.407853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.407864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.408215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.408225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.408585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.623 [2024-07-22 19:43:35.408594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.623 qpair failed and we were unable to recover it. 00:39:16.623 [2024-07-22 19:43:35.408956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.408965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.409342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.409351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.409722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.409732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.410083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.410093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.410535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.410544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.410886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.410896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 
00:39:16.624 [2024-07-22 19:43:35.411249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.411258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.411606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.411615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.411969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.411978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.412311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.412320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.412672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.412681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.413007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.413016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.413357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.413366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.413747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.413756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.414085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.414094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.414345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.414354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 
00:39:16.624 [2024-07-22 19:43:35.414716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.414724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.415097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.415107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.415433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.415443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.415690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.415699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.416048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.416057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.416436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.416446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.416778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.416787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.417154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.417164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.417348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.417362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.417718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.417727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 
00:39:16.624 [2024-07-22 19:43:35.418096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.418105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.418422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.418431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.418783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.418792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.419121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.419131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.419564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.419575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.419963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.419973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.420330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.420340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.420674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.420682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.421034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.421043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 00:39:16.624 [2024-07-22 19:43:35.421400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.421409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.624 qpair failed and we were unable to recover it. 
00:39:16.624 [2024-07-22 19:43:35.421800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.624 [2024-07-22 19:43:35.421809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.422177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.422186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.422521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.422531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.422846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.422855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.423209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.423218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.423518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.423527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.423906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.423915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.424245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.424255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.424615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.424624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.424959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.424968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 
00:39:16.625 [2024-07-22 19:43:35.425295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.425305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.425548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.425557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.425850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.425865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.426219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.426229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.426579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.426588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.426971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.426981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.427333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.427342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.427694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.427704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.428056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.428066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.428421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.428430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 
00:39:16.625 [2024-07-22 19:43:35.428806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.428815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.429190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.429199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.429548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.429558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.429745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.429756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.430157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.430166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.430508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.430518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.430855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.430864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.431217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.431226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.431565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.431573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.431914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.431924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 
00:39:16.625 [2024-07-22 19:43:35.432288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.625 [2024-07-22 19:43:35.432298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.625 qpair failed and we were unable to recover it. 00:39:16.625 [2024-07-22 19:43:35.432654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.432663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.432993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.433002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.433352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.433361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.433698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.433709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.434047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.434056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.434376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.434385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.434796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.434806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.435172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.435182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.435533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.435543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 
00:39:16.626 [2024-07-22 19:43:35.435871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.435881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.436251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.436261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.436548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.436557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.436912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.436921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.437269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.437279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.437664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.437673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.438001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.438010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.438371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.438380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.438758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.438767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.439099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.439108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 
00:39:16.626 [2024-07-22 19:43:35.439365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.439375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.439741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.439750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.440082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.440095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.440334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.440343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.440718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.440728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.440949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.440959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.441316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.441326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.441710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.441719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.442073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.442083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.442457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.442467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 
00:39:16.626 [2024-07-22 19:43:35.442796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.442805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.443155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.443164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.443575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.443584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.443913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.443922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.444276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.444285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.444647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.444656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.444991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.445000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.445370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.445379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.626 [2024-07-22 19:43:35.445755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.626 [2024-07-22 19:43:35.445764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.626 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.446098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.446107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 
00:39:16.627 [2024-07-22 19:43:35.446452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.446462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.446825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.446834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.447170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.447179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.447534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.447543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.447883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.447893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.448231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.448240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.448589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.448598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.448975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.448985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.449339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.449349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.449711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.449720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 
00:39:16.627 [2024-07-22 19:43:35.450073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.450082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.450415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.450425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.450779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.450788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.451115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.451125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.451456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.451465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.451840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.451849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.452083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.452092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.452470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.452479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.452822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.452833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 00:39:16.627 [2024-07-22 19:43:35.453182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.627 [2024-07-22 19:43:35.453191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.627 qpair failed and we were unable to recover it. 
00:39:16.627 [2024-07-22 19:43:35.453551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.627 [2024-07-22 19:43:35.453560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.627 qpair failed and we were unable to recover it.
00:39:16.633 (the same three error messages repeat for every reconnect attempt from 19:43:35.453 through 19:43:35.525: each connect() to 10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the socket connection error for tqpair=0x6150003a0000, and the qpair cannot be recovered)
00:39:16.633 [2024-07-22 19:43:35.526190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.526198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.526548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.526560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.526911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.526919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.527243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.527253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.527623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.527632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.527844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.527854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.528206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.528216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.528568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.528578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.528922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.528935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.529370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.529379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 
00:39:16.633 [2024-07-22 19:43:35.529717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.529726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.530078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.530087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.530415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.530425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.530835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.530844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.531174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.531183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.531529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.531539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.531892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.531901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.532269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.532279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.532644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.532653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.533014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.533023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 
00:39:16.633 [2024-07-22 19:43:35.533358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.533368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.533596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.533606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.533790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.633 [2024-07-22 19:43:35.533800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.633 qpair failed and we were unable to recover it. 00:39:16.633 [2024-07-22 19:43:35.534126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.534135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.534472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.534481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.534647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.534656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.534984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.534994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.535350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.535360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.535653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.535663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.536035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.536045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 
00:39:16.634 [2024-07-22 19:43:35.536378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.536387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.536738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.536746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.536928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.536938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.537297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.537307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.537636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.537646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.537997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.538006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.538358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.538367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.538734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.538743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.539086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.539095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.539448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.539458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 
00:39:16.634 [2024-07-22 19:43:35.539808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.539817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.540187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.540197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.540544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.540553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.540903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.540912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.541264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.541274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.541628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.541637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.542011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.542028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.542377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.542387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.542732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.542741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.543094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.543103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 
00:39:16.634 [2024-07-22 19:43:35.543470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.543480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.543809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.543818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.544174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.544183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.544534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.544543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.544877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.544886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.545240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.545250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.545573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.545584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.634 qpair failed and we were unable to recover it. 00:39:16.634 [2024-07-22 19:43:35.545951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.634 [2024-07-22 19:43:35.545960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.546287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.546297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.546650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.546660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 
00:39:16.635 [2024-07-22 19:43:35.546974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.546984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.547341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.547351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.547608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.547617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.547932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.547941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.548314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.548324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.548671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.548681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.549029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.549040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.549394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.549403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.549718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.549728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.550088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.550097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 
00:39:16.635 [2024-07-22 19:43:35.550355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.550365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.550716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.550725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.551054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.551067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.635 [2024-07-22 19:43:35.551426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.635 [2024-07-22 19:43:35.551435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.635 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.551862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.551872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.552235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.552245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.552597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.552606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.553007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.553016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.553346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.553356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.553731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.553740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 
00:39:16.908 [2024-07-22 19:43:35.554069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.554078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.554388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.554400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.554744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.554754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.555106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.555115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.555487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.555497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.555836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.555845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.556099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.556108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.556553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.556563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.556892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.556902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.557277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.557286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 
00:39:16.908 [2024-07-22 19:43:35.557670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.557680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.558037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.558045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.558376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.558385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.558757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.558765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.559094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.559103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.559434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.559444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.559798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.559807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.560138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.560147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.560498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.560507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 00:39:16.908 [2024-07-22 19:43:35.560873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.560882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.908 qpair failed and we were unable to recover it. 
00:39:16.908 [2024-07-22 19:43:35.561219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.908 [2024-07-22 19:43:35.561228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.561668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.561678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.562034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.562043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.562382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.562394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.562755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.562764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.563104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.563115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.563488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.563498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.563682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.563693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.564062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.564072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.564408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.564418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 
00:39:16.909 [2024-07-22 19:43:35.564765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.564775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.564992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.565002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.565355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.565364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.565716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.565725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.566132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.566141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.566322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.566333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.566720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.566730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.567064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.567073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.567416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.567426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.567605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.567615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 
00:39:16.909 [2024-07-22 19:43:35.567981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.567991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.568270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.568281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.568684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.568693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.569022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.569032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.569333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.569343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.569750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.569761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.569945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.569955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.570283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.570292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.570524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.570534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.570727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.570738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 
00:39:16.909 [2024-07-22 19:43:35.571102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.571111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.571450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.571460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.571817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.571833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.572209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.572219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.572384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.572394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.572748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.572758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.572950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.572963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.909 [2024-07-22 19:43:35.573388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.909 [2024-07-22 19:43:35.573399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.909 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.573749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.573758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.574188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.574197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 
00:39:16.910 [2024-07-22 19:43:35.574573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.574583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.574933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.574942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.575295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.575305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.575644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.575654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.576082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.576091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.576442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.576451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.576820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.576830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.577180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.577189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.577423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.577432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.577803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.577813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 
00:39:16.910 [2024-07-22 19:43:35.578147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.578156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.578535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.578544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.578899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.578908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.579242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.579252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.579618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.579627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.579967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.579977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.580334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.580344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.580649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.580659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.580901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.580910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.581212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.581221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 
00:39:16.910 [2024-07-22 19:43:35.581584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.581593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.581864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.581874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.582230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.582240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.582633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.582642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.583001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.583012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.583408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.583417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.583850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.583859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.584199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.584212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.584556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.584565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.584936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.584945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 
00:39:16.910 [2024-07-22 19:43:35.585151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.585161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.585522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.585532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.585859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.585868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.586224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.586234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.586590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.910 [2024-07-22 19:43:35.586600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.910 qpair failed and we were unable to recover it. 00:39:16.910 [2024-07-22 19:43:35.586954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.586963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.587293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.587303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.587689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.587698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.588032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.588042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.588399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.588408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 
00:39:16.911 [2024-07-22 19:43:35.588753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.588762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.589097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.589106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.589428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.589437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.589765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.589774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.590098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.590107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.590476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.590485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.590815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.590824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.591181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.591199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.591564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.591574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.591911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.591920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 
00:39:16.911 [2024-07-22 19:43:35.592316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.592325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.592733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.592743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.593094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.593103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.593464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.593474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.593725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.593734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.594086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.594095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.594426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.594437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.594653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.594663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.595022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.595031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.595428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.595438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 
00:39:16.911 [2024-07-22 19:43:35.595774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.595787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.596140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.596151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.596508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.596517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.596848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.596857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.597224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.597233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.597587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.597596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.597924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.597933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.598286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.598295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.598626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.598635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.598987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.598996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 
00:39:16.911 [2024-07-22 19:43:35.599350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.599359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.599709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.599725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.600075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.911 [2024-07-22 19:43:35.600084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.911 qpair failed and we were unable to recover it. 00:39:16.911 [2024-07-22 19:43:35.600423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.600433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.600828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.600837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.601165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.601174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.601525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.601535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.601905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.601914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.602245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.602255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.602611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.602620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 
00:39:16.912 [2024-07-22 19:43:35.602998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.603007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.603215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.603225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.603571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.603581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.603913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.603923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.604280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.604290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.604656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.604665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.605023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.605032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.605218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.605228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.605411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.605421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.605659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.605668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 
00:39:16.912 [2024-07-22 19:43:35.605919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.605929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.606300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.606310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.606651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.606660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.607015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.607024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.607378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.607387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.607733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.607742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.608112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.608122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3185427 Killed "${NVMF_APP[@]}" "$@" 00:39:16.912 [2024-07-22 19:43:35.608380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.608390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 [2024-07-22 19:43:35.608575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.608585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 
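The repeated posix_sock_create errors above carry errno = 111, which on Linux is ECONNREFUSED, and the shell message from target_disconnect.sh line 36 shows the nvmf target application being killed; together they are consistent with the host retrying connects to 10.0.0.2:4420 while nothing is listening there. A minimal stand-alone sketch (not part of this test; the loopback address and port are chosen only for illustration) that reproduces errno 111 by connecting to a port with no listener:

import errno
import socket

# Connect to a port we assume has no listener; the kernel rejects the
# attempt and connect() raises ECONNREFUSED, i.e. errno 111 on Linux.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("127.0.0.1", 4420))  # illustrative address/port, no server expected
except OSError as e:
    print(e.errno, errno.errorcode.get(e.errno))  # expected: 111 ECONNREFUSED
finally:
    s.close()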
00:39:16.912 [2024-07-22 19:43:35.608989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.609000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:39:16.912 [2024-07-22 19:43:35.609377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.609390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.912 qpair failed and we were unable to recover it. 00:39:16.912 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:16.912 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:16.912 [2024-07-22 19:43:35.609807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.912 [2024-07-22 19:43:35.609817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:16.913 [2024-07-22 19:43:35.610147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.610158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:16.913 [2024-07-22 19:43:35.610457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.610468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.610800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.610810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.611056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.611066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.611408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.611418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 
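The trace above invokes nvmfappstart -m 0xF0; assuming -m is the usual SPDK hexadecimal CPU core mask (an assumption, not stated in this log), 0xF0 selects cores 4 through 7. A short illustrative sketch that expands such a mask into core indices:

def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices whose bits are set in a hex core mask."""
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0xF0))  # -> [4, 5, 6, 7]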
00:39:16.913 [2024-07-22 19:43:35.611613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.611624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.611962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.611972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.612314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.612324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.612659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.612668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.613028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.613037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.613398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.613408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.613787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.613796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.614192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.614206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.614541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.614551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.614807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.614817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 
00:39:16.913 [2024-07-22 19:43:35.615062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.615072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.615344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.615354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.615726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.615735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.616065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.616075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.616476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.616486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.616856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.616865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.617214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.617224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.617564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.617577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3186457 00:39:16.913 [2024-07-22 19:43:35.617826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.617835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 
00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3186457 00:39:16.913 [2024-07-22 19:43:35.618092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.618102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.618360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.618370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3186457 ']' 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.913 [2024-07-22 19:43:35.618750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.618760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:16.913 [2024-07-22 19:43:35.619007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.619017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:16.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:16.913 [2024-07-22 19:43:35.619287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.619297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:16.913 [2024-07-22 19:43:35.619540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.619549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 
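Here the trace restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and calls waitforlisten 3186457 while announcing that it is waiting for the UNIX domain socket /var/tmp/spdk.sock. As a rough analogue only (not SPDK's actual waitforlisten helper), a readiness poll could look like the sketch below, assuming "ready" simply means the RPC socket accepts a connection:

import socket
import time

def wait_for_unix_socket(path: str = "/var/tmp/spdk.sock",
                         timeout: float = 30.0,
                         interval: float = 0.2) -> bool:
    """Poll a UNIX-domain socket until a listener accepts, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True           # a listener is up on the socket
            except OSError:
                time.sleep(interval)  # not ready yet; retry until deadline
    return False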
00:39:16.913 19:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:16.913 [2024-07-22 19:43:35.619775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.619785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.913 [2024-07-22 19:43:35.620111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.913 [2024-07-22 19:43:35.620122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.913 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.620480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.620489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.620826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.620835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.621197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.621210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.621577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.621586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.621924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.621933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.622293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.622302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.622651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.622661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 
00:39:16.914 [2024-07-22 19:43:35.623008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.623018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.623421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.623431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.623684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.623693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.624098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.624108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.624289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.624299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.624664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.624674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.625085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.625100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.625463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.625474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.625609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.625620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.625929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.625940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 
00:39:16.914 [2024-07-22 19:43:35.626294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.626305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.626536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.626546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.626880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.626891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.627179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.627190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.627555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.627567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.627926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.627937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.628317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.628329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.628664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.628675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.629027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.629038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.629285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.629296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 
00:39:16.914 [2024-07-22 19:43:35.629621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.629632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.629987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.629998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.630356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.630367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.630607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.630618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.630974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.630986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.631344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.631356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.631717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.631729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.632082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.632093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.632467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.632478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.914 [2024-07-22 19:43:35.632829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.632839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 
00:39:16.914 [2024-07-22 19:43:35.633063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.914 [2024-07-22 19:43:35.633074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.914 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.633334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.633345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.633721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.633732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.634089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.634100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.634477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.634488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.634842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.634852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.635228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.635240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.635616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.635628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.635981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.635992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.636361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.636372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 
00:39:16.915 [2024-07-22 19:43:35.636735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.636746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.637102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.637114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.637453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.637465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.637727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.637739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.638117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.638128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.638489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.638500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.638846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.638859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.639213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.639229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.639573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.639584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.639953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.639964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 
00:39:16.915 [2024-07-22 19:43:35.640288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.640299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.640669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.640679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.641041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.641051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.641324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.641334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.641707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.641718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.641965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.641977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.642371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.642382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.642581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.642593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.642924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.642934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.643294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.643307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 
00:39:16.915 [2024-07-22 19:43:35.643525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.643536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.643901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.643911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.644274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.644285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.644677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.644687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.645064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.645075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.645350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.645360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.645737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.645748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.646114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.646124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.915 [2024-07-22 19:43:35.646516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.915 [2024-07-22 19:43:35.646527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.915 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.646774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.646785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 
00:39:16.916 [2024-07-22 19:43:35.647148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.647158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.647514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.647526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.647788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.647800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.648001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.648013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.648368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.648379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.648767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.648778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.648972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.648983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.649334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.649345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.649738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.649749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.650114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.650126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 
00:39:16.916 [2024-07-22 19:43:35.650507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.650518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.650884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.650896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.651343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.651353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.651708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.651719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.652096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.652106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.652356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.652367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.652741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.652753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.653121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.653132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.653495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.653506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.653955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.653966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 
00:39:16.916 [2024-07-22 19:43:35.654269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.654281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.654562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.654572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.654798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.654808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.655170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.655180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.655580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.655591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.655947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.655958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.656347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.656358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.656712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.656723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.657087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.657097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.657472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.657483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 
00:39:16.916 [2024-07-22 19:43:35.657872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.657883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.658247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.658258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.658638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.658649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.658924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.658935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.659322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.659333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.916 [2024-07-22 19:43:35.659519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.916 [2024-07-22 19:43:35.659529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.916 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.659766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.659777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.660146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.660156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.660392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.660402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.660790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.660800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 
00:39:16.917 [2024-07-22 19:43:35.660882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.660892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.660951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.660964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.661326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.661338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.661744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.661755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.662078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.662089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.662435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.662446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.662763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.662773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.662988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.662999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.663212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.663224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.663599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.663611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 
00:39:16.917 [2024-07-22 19:43:35.664012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.664023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.664412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.664423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.664641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.664651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.664847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.664858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.665232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.665243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.665639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.665650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.666007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.666021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.666359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.666370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.666608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.666618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.667011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.667021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 
00:39:16.917 [2024-07-22 19:43:35.667379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.667391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.667608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.667618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.667998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.668009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.668236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.668247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.668564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.668575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.668791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.668801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.669219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.669232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.917 [2024-07-22 19:43:35.669590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.917 [2024-07-22 19:43:35.669601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.917 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.669956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.669967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.670358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.670370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 
00:39:16.918 [2024-07-22 19:43:35.670675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.670687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.670888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.670899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.671253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.671264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.671632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.671643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.671864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.671875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.672109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.672119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.672469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.672480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.672713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.672724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.672978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.672988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.673375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.673386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 
00:39:16.918 [2024-07-22 19:43:35.673618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.673628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.673831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.673841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.674171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.674182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.674690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.674701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.675062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.675073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.675421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.675432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.675807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.675817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.676210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.676221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.676596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.676607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.676967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.676977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 
00:39:16.918 [2024-07-22 19:43:35.677339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.677351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.677729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.677740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.677966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.677977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.678312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.678323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.678731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.678742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.679139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.679150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.679532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.679545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.679911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.679922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.680285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.680295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.680661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.680671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 
00:39:16.918 [2024-07-22 19:43:35.681035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.681046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.681418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.681430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.681798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.681808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.682155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.682165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.682403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.682414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.918 [2024-07-22 19:43:35.682801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.918 [2024-07-22 19:43:35.682815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.918 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.683187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.683197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.683546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.683557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.683752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.683764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.684128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.684139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 
00:39:16.919 [2024-07-22 19:43:35.684343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.684355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.684756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.684768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.685160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.685171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.685524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.685535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.685932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.685944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.686298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.686309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.686674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.686685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.687053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.687064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.687426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.687437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.687826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.687836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 
00:39:16.919 [2024-07-22 19:43:35.688195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.688209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.688459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.688470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.688746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.688757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.689114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.689127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.689509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.689520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.689741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.689751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.690116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.690128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.690501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.690512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.690874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.690886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.691250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.691261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 
00:39:16.919 [2024-07-22 19:43:35.691639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.691650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.692040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.692051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.692456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.692467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.692871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.692881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.693243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.693254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.693630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.693641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.694001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.694012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.694391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.694402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.694758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.694768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.695160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.695171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 
00:39:16.919 [2024-07-22 19:43:35.695602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.695614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.695969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.695980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.696239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.696250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.919 qpair failed and we were unable to recover it. 00:39:16.919 [2024-07-22 19:43:35.696625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.919 [2024-07-22 19:43:35.696637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.696839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.696849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.697222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.697233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.697463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.697473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.697856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.697867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.698234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.698246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.698622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.698632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 
00:39:16.920 [2024-07-22 19:43:35.698993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.699004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.699221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.699231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.699577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.699588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.699850] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:16.920 [2024-07-22 19:43:35.699946] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.920 [2024-07-22 19:43:35.699953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.699962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.700320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.700331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.700515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.700524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.700908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.700920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.701284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.701295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.701497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.701508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 
00:39:16.920 [2024-07-22 19:43:35.701854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.701865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.702217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.702228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.702444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.702455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.702832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.702843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.703224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.703235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.703592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.703603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.703816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.703827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.704194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.704209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.704569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.704582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.704804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.704816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 
00:39:16.920 [2024-07-22 19:43:35.705163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.705178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.705284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.705295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.705629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.705640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.705997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.706008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.706355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.706366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.706596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.706607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.706987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.707000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.707355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.707366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.707739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.707751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 00:39:16.920 [2024-07-22 19:43:35.708130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.708141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.920 qpair failed and we were unable to recover it. 
00:39:16.920 [2024-07-22 19:43:35.708492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.920 [2024-07-22 19:43:35.708504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.708862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.708874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.709094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.709105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.709325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.709337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.709555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.709568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.709929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.709940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.710295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.710307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.710666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.710678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.711057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.711069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.711433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.711445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 
00:39:16.921 [2024-07-22 19:43:35.711806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.711817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.712166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.712177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.712579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.712591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.712949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.712960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.713316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.713327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.713650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.713662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.714040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.714051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.714423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.714434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.714753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.714764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.715124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.715135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 
00:39:16.921 [2024-07-22 19:43:35.715484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.715495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.715848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.715859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.716218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.716230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.716445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.716457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.716800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.716811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.717158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.717169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.717519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.717530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.717885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.921 [2024-07-22 19:43:35.717896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.921 qpair failed and we were unable to recover it. 00:39:16.921 [2024-07-22 19:43:35.718102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.718113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.718502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.718514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 
00:39:16.922 [2024-07-22 19:43:35.718882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.718893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.719250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.719262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.719638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.719648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.720004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.720016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.720276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.720288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.720534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.720546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.720925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.720938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.721297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.721309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.721672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.721683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.722040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.722051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 
00:39:16.922 [2024-07-22 19:43:35.722395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.722407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.722467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.722479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.722736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.722747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.723031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.723041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.723401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.723412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.723762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.723773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.724131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.724142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.724492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.724503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.922 [2024-07-22 19:43:35.724861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.922 [2024-07-22 19:43:35.724871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.922 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.725140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.725150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 
00:39:16.923 [2024-07-22 19:43:35.725495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.725506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.725883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.725893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.726255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.726266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.726671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.726681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.727049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.727064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.727314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.727325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.727693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.727706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.728079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.728091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.728468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.728479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.728823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.728835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 
00:39:16.923 [2024-07-22 19:43:35.729199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.729213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.729490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.729500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.729857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.729868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.730238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.730249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.730620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.730630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.731008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.731018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.731374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.731384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.731585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.731595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.731823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.731834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.732207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.732218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 
00:39:16.923 [2024-07-22 19:43:35.732460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.732470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.732831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.732841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.733198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.733212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.733540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.733551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.733755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.733765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.734118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.734128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.923 [2024-07-22 19:43:35.734473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.923 [2024-07-22 19:43:35.734486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.923 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.734680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.734691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.735059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.735070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.735430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.735442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 
00:39:16.924 [2024-07-22 19:43:35.735796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.735807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.736156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.736166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.736510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.736522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.736870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.736880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.737237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.737248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.737446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.737456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.737664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.737675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.737988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.737999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.738363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.738373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.738750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.738761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 
00:39:16.924 [2024-07-22 19:43:35.739077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.739087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.739440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.739451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.739676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.739686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.740076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.740086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.740278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.740288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.740674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.740685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.741031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.741042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.741423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.741433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.741642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.741652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.741894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.741905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 
00:39:16.924 [2024-07-22 19:43:35.742260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.742271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.742502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.742513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.742901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.742912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.743274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.743285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.743654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.743664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.744044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.924 [2024-07-22 19:43:35.744054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.924 qpair failed and we were unable to recover it. 00:39:16.924 [2024-07-22 19:43:35.744409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.744420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.744638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.744649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.745006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.745016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.745394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.745405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 
00:39:16.925 [2024-07-22 19:43:35.745759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.745770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.746125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.746136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.746504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.746515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.746901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.746912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.747285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.747295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.747672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.747682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.748071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.748083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.748305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.748317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.748536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.748551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.748920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.748931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 
00:39:16.925 [2024-07-22 19:43:35.749293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.749304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.749666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.749677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.750109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.750119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.750446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.750458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.750888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.750900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.751246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.751257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.751613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.751624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.751974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.751984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.752340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.752351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.752690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.752701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 
00:39:16.925 [2024-07-22 19:43:35.752957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.752968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.753210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.753221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.753565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.753575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.753942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.753953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.925 [2024-07-22 19:43:35.754307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.925 [2024-07-22 19:43:35.754318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.925 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.754682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.754693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.755070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.755081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.755450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.755462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.755837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.755847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.756206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.756217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 
00:39:16.926 [2024-07-22 19:43:35.756577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.756587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.756812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.756823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.757082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.757092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.757467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.757478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.757834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.757845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.758219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.758231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.758479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.758489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.758848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.758859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.759057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.759068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.759417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.759428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 
00:39:16.926 [2024-07-22 19:43:35.759687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.759698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.759888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.759899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.760222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.760233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.760575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.760586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.760922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.760933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.761154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.761164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.761512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.761525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.761800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.761810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.762170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.762180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.762560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.762571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 
00:39:16.926 [2024-07-22 19:43:35.762920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.762930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.763150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.763160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.763484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.763495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.763865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.763876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.926 [2024-07-22 19:43:35.764311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.926 [2024-07-22 19:43:35.764324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.926 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.764676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.764687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.765040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.765050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.765422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.765432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.765787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.765798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.766082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.766092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 
00:39:16.927 [2024-07-22 19:43:35.766498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.766509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.766870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.766881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.767245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.767257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.767459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.767470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.767789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.767799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.927 [2024-07-22 19:43:35.768160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.768171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.768533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.768545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.768941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.768952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.769215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.769226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.769609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.769620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 
00:39:16.927 [2024-07-22 19:43:35.769824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.769834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.770179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.770190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.770372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.770386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.770676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.770687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.770917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.770926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.771118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.771129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.771539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.771550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.771902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.771914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.772312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.772323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.772699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.772710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 
00:39:16.927 [2024-07-22 19:43:35.773094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.773104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.773507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.773518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.773834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.773845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.774135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.774145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.774500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.774511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.927 [2024-07-22 19:43:35.774859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.927 [2024-07-22 19:43:35.774869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.927 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.775179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.775192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.775550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.775560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.775828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.775839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.776207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.776218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 
00:39:16.928 [2024-07-22 19:43:35.776644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.776655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.776999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.777009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.777369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.777380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.777712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.777723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.778049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.778061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.778423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.778434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.778801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.778811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.779163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.779173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.779359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.779370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.779748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.779759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 
00:39:16.928 [2024-07-22 19:43:35.780137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.780147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.780437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.780447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.780800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.780810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.781126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.781137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.781445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.781456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.781745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.781756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.782114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.782124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.782451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.782463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.782824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.782834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 00:39:16.928 [2024-07-22 19:43:35.783184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.783195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.928 qpair failed and we were unable to recover it. 
00:39:16.928 [2024-07-22 19:43:35.783565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.928 [2024-07-22 19:43:35.783575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.783821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.783831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.784181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.784191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.784617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.784627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.784983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.784993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.785277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.785287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.785665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.785675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.786023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.786033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.786390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.786401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.786771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.786782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 
00:39:16.929 [2024-07-22 19:43:35.787137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.787148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.787517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.787528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.787887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.787899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.788123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.788135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.788500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.788511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.788855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.788867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.789229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.789242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.789617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.789627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.789818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.789828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.790155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.790166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 
00:39:16.929 [2024-07-22 19:43:35.790522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.790532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.790902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.790912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.791264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.791275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.791481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.791492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.791863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.791873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.792093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.792104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.792479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.792493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.792846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.792856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.793207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.793218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.793357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.793368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 
00:39:16.929 [2024-07-22 19:43:35.793747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.793758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.929 [2024-07-22 19:43:35.794110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.929 [2024-07-22 19:43:35.794120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.929 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.794485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.794496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.794870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.794881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.795074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.795085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.795443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.795454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.795810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.795821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.796199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.796213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.796579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.796590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.796941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.796952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 
00:39:16.930 [2024-07-22 19:43:35.797303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.797314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.797688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.797699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.798050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.798061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.798332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.798343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.798707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.798717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.799091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.799101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.799453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.799465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.799849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.799859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.800216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.800226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 00:39:16.930 [2024-07-22 19:43:35.800609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:16.930 [2024-07-22 19:43:35.800619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:16.930 qpair failed and we were unable to recover it. 
00:39:16.930 [2024-07-22 19:43:35.800970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:16.930 [2024-07-22 19:43:35.800981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:16.930 qpair failed and we were unable to recover it.
00:39:16.930 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously for every reconnect attempt from 19:43:35.800970 through 19:43:35.842220 ...]
00:39:16.934 [2024-07-22 19:43:35.842492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:39:16.934 [... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence continues repeating from 19:43:35.842556 through 19:43:35.873387 ...]
00:39:17.210 [2024-07-22 19:43:35.873377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:17.210 [2024-07-22 19:43:35.873387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:17.210 qpair failed and we were unable to recover it.
00:39:17.210 [2024-07-22 19:43:35.873760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.873771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.874112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.874123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.874498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.874509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.874861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.874872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.875252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.875263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.875618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.875628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.875974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.875985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.876336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.876347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.876692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.876702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.876896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.876906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 
00:39:17.210 [2024-07-22 19:43:35.877277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.877288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.877643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.877654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.878029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.878040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.878306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.878317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.878645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.878655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.879039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.879050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.879422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.879434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.879725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.879735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.880090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.880100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.880477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.880488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 
00:39:17.210 [2024-07-22 19:43:35.880862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.880873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.881221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.881233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.881611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.881622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.881997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.882009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.882254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.882268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.882625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.882636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.882989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.883000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.883349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.883360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.883707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.883718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.884072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.884082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 
00:39:17.210 [2024-07-22 19:43:35.884279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.884291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.884719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.884730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.885104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.885115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.885485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.885497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.885872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.885883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.886236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.886248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.886578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.886589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.886934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.886945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.887280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.887295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.887499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.887511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 
00:39:17.210 [2024-07-22 19:43:35.887745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.887757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.888112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.888123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.888493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.888504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.888868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.888879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.889254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.889264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.889618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.889629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.889985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.889996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.890251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.890262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.890535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.890545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.890974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.890985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 
00:39:17.210 [2024-07-22 19:43:35.891330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.891342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.891700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.891711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.892089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.892100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.892472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.892483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.892879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.892890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.893251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.893262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.893586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.210 [2024-07-22 19:43:35.893597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.210 qpair failed and we were unable to recover it. 00:39:17.210 [2024-07-22 19:43:35.893955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.893966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.894187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.894198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.894558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.894569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 
00:39:17.211 [2024-07-22 19:43:35.894915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.894927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.895287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.895299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.895520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.895531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.895948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.895959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.896303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.896314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.896677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.896688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.896898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.896908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.897261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.897272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.897645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.897656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.898008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.898021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 
00:39:17.211 [2024-07-22 19:43:35.898374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.898385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.898738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.898748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.899123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.899133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.899500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.899512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.899878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.899889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.899946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.899956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.900289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.900300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.900566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.900577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.900955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.900968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.901320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.901331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 
00:39:17.211 [2024-07-22 19:43:35.901700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.901711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.902084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.902095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.902473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.902484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.902842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.902853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.903193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.903207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.903572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.903583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.903934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.903944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.904297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.904308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.904679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.904695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.904913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.904924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 
00:39:17.211 [2024-07-22 19:43:35.905182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.905194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.905572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.905583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.905987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.905998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.906358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.906369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.906720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.906732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.907019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.907029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.907387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.907398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.907772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.907782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.907998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.908008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.908365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.908375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 
00:39:17.211 [2024-07-22 19:43:35.908728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.908739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.909114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.909124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.909519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.909531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.909749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.909759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.910116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.910126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.910483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.910494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.910849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.910860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.911213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.911223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.911591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.911602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.911950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.911961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 
00:39:17.211 [2024-07-22 19:43:35.912157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.912168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.912389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.912400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.912756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.912766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.913138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.913149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.913501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.913512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.913854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.913865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.914223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.914234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.914441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.211 [2024-07-22 19:43:35.914451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.211 qpair failed and we were unable to recover it. 00:39:17.211 [2024-07-22 19:43:35.914818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.914832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.915093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.915103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.212 [2024-07-22 19:43:35.915320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.915332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.915713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.915723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.916075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.916086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.916457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.916468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.916839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.916850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.917222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.917233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.917582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.917593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.917950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.917961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.918314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.918325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.918684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.918695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.212 [2024-07-22 19:43:35.919041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.919053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.919405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.919418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.919791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.919802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.920177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.920187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.920540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.920550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.920908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.920919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.921273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.921284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.921670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.921681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.922109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.922121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.922288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.922301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.212 [2024-07-22 19:43:35.922566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.922576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.922956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.922966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.923329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.923341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.923687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.923698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.924050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.924060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.924260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.924271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.924635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.924646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.925001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.925011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.925368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.925380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.925742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.925753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.212 [2024-07-22 19:43:35.926133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.926145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.926497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.926507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.926863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.926878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.927252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.927264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.927492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.927502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.927856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.927867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.928296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.928307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.928753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.928763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.929119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.929132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.929403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.929414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.212 [2024-07-22 19:43:35.929669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.929680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.930025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.930036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.930389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.930401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.930623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.930635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.930991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.931002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.931378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.931389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.931734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.931745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.932099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.932109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.932426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.932439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.932812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.932823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.212 [2024-07-22 19:43:35.933182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.933192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.933553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.933564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.933995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.934007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.934357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.934368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.934734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.934745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.934994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.935005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.935444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.935456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.935803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.935814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.936169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.936181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.936579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.936591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.212 [2024-07-22 19:43:35.936942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.936954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.937328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.937339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.937711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.937722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.938075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.938086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.938534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.938545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.938908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.938919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.939115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.939126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.939496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.939509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.939869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.939880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 00:39:17.212 [2024-07-22 19:43:35.940225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.212 [2024-07-22 19:43:35.940236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.212 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.940436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.940447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.940773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.940784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.940982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.940994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.941195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.941211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.941557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.941569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.941930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.941941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.942299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.942310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.942697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.942708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.943075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.943087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.943465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.943476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.943831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.943842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.944217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.944228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.944580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.944591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.944955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.944965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.945319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.945329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.945698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.945709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.946104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.946114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.946434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.946445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.946797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.946808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.947183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.947194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.947573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.947584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.947938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.947949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.948305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.948316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.948676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.948686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.949039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.949050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.949404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.949415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.949803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.949818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.950023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.950034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.950398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.950409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.950761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.950772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.951128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.951140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.951492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.951503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.951871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.951881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.952237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.952248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.952471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.952481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.952860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.952870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.953222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.953234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.953595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.953606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.953962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.953974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.954349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.954360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.954720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.954732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.954952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.954962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.955162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.955174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.955545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.955556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.955910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.955921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.956268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.956279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.956649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.956660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.957041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.957052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.957410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.957423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.957762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.957774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.958126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.958136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.958370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.958380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.958732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.958742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.959094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.959104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.959476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.959486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.959823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.959834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.960203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.960214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.960542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.960562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.960904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.960916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.961291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.961302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.961617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.961628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.961868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.961878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.962233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.962244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.962618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.962629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.962940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.962952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.963307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.963318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.963666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.963677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.964039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.964051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.964398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.964410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.964763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.964774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 
00:39:17.213 [2024-07-22 19:43:35.965121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.965133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.213 [2024-07-22 19:43:35.965517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.213 [2024-07-22 19:43:35.965528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.213 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.965863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.965874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.966231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.966243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.966587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.966598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.967008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.967019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.967380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.967391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.967591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.967602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.967923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.967934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.968277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.968288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 
00:39:17.214 [2024-07-22 19:43:35.968485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.968497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.968836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.968847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.969071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.969082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.969437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.969450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.969800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.970171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.970182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.970551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.970563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.970945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.970956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.971310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.971325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.971672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.971684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 
00:39:17.214 [2024-07-22 19:43:35.972116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.972131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.972469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.972481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.972837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.972849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.973207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.973219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.973582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.973593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.973974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.973987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.974357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.974369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.974733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.974745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.975096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.975107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.975467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.975479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 
00:39:17.214 [2024-07-22 19:43:35.975721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.975732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.976085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.976096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.976471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.976482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.976871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.976882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.977229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.977241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.977601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.977612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.977965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.977977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.978350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.978362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.978732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.978744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.979099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.979110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 
00:39:17.214 [2024-07-22 19:43:35.979493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.979505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.979875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.979886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.980249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.980261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.980428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.980440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.980805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.980817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.981192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.981216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.981571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.981583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.981805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.981818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.982166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.982177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.982549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.982560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 
00:39:17.214 [2024-07-22 19:43:35.982754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.982766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.983127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.983139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.983338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.983350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.983726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.983737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.984121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.984132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.984478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.984490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.984682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.984693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.985084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.985095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.985315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.985327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.985682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.985693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 
00:39:17.214 [2024-07-22 19:43:35.986046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.986057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.986417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.986429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.986737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.986747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.987107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.987118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.214 [2024-07-22 19:43:35.987501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.214 [2024-07-22 19:43:35.987513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.214 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.987888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.987900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.988253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.988264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.988649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.988660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.988966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.988976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.989352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.989362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 
00:39:17.215 [2024-07-22 19:43:35.989672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.989683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.990053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.990064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.990412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.990423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.990808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.990819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.991163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.991175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.991517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.991528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.991879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.991890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.992223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.992234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.992598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.992608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.992960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.992971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 
00:39:17.215 [2024-07-22 19:43:35.993302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.993321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.993668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.993679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.994032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.994042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.994394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.994405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.994762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.994777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.995152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.995163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.995513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.995524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.995722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.995733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.996098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.996110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.996336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.996347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 
00:39:17.215 [2024-07-22 19:43:35.996697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.996708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.997098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.997109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.997458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.997470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.997846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.997857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.998052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.998063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.998427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.998437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.998793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.998803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.999178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.999189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.999535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.999547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:35.999836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:35.999847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 
00:39:17.215 [2024-07-22 19:43:36.000202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.000213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.000575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.000586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.000944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.000955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.001311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.001323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.001679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.001689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.002064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.002074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.002343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.002354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.002730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.002741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.003095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.003105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.003369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.003379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 
00:39:17.215 [2024-07-22 19:43:36.003788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.003798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.004015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.004025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.004382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.004393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.004772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.004784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.005137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.005148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.005337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.005349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.005662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.005674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.006038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.006049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.006414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.006425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.006780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.006791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 
00:39:17.215 [2024-07-22 19:43:36.007138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.007149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.007534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.007545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.007731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.007742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.007982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.007993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.008338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.008349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.008697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.008707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.009059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.009071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.009413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.009425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.009783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.009793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.010164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.010174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 
00:39:17.215 [2024-07-22 19:43:36.010572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.010583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.010824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.010835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.011213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.011224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.011600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.011611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.011962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.011973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.012317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.012328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.215 qpair failed and we were unable to recover it. 00:39:17.215 [2024-07-22 19:43:36.012674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.215 [2024-07-22 19:43:36.012686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.013061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.013071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.013461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.013474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.013826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.013838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 
00:39:17.216 [2024-07-22 19:43:36.014189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.014203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.014558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.014569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.014915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.014926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.015399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.015439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.015801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.015814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.016196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.016213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.016584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.016595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.016971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.016983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.017423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.017464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.017845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.017859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 
00:39:17.216 [2024-07-22 19:43:36.018221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.018234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.018506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.018516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.018879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.018888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.019278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.019288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.019653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.019662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.020017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.020026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.020378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.020388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.020752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.020761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.021105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.021114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.021486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.021495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 
00:39:17.216 [2024-07-22 19:43:36.021841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.021851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.022260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.022271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.022624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.022635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.022898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.022910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.023293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.023305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.023505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.023519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.023693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.023705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.023946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.023957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.024349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.024361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.024739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.024751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 
00:39:17.216 [2024-07-22 19:43:36.025111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.025123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.025155] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:17.216 [2024-07-22 19:43:36.025189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:17.216 [2024-07-22 19:43:36.025207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:17.216 [2024-07-22 19:43:36.025219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:17.216 [2024-07-22 19:43:36.025230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:17.216 [2024-07-22 19:43:36.025405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:39:17.216 [2024-07-22 19:43:36.025477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.025489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.025648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:39:17.216 [2024-07-22 19:43:36.025763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:39:17.216 [2024-07-22 19:43:36.025837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.025848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 [2024-07-22 19:43:36.025787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.026205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.026217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.026381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.026393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.026766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.026779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.027138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.027149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 
00:39:17.216 [2024-07-22 19:43:36.027421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.027433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.027798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.027810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.028155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.028166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.028515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.028527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.028903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.028914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.029270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.029282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.029652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.029664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.029927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.029938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.030327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.030338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.030686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.030697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 
00:39:17.216 [2024-07-22 19:43:36.030979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.030990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.031122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.031133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.031333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.031345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.031570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.031582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.031939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.031950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.032305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.032317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.032737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.032748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.032973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.032984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.033231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.033243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 00:39:17.216 [2024-07-22 19:43:36.033441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.216 [2024-07-22 19:43:36.033452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.216 qpair failed and we were unable to recover it. 
00:39:17.217 [2024-07-22 19:43:36.033784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.033795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.034049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.034060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.034414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.034426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.034792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.034803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.035149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.035160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.035519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.035532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.035895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.035906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.036314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.036326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.036434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.036445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 00:39:17.217 [2024-07-22 19:43:36.036773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.217 [2024-07-22 19:43:36.036784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.217 qpair failed and we were unable to recover it. 
00:39:17.217 [2024-07-22 19:43:36.037011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:17.217 [2024-07-22 19:43:36.037022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:17.217 qpair failed and we were unable to recover it.
00:39:17.217 [... the same three-line error repeats without interruption from 19:43:36.037 to 19:43:36.108: connect() failed with errno = 111 (ECONNREFUSED), followed by the sock connection error for tqpair=0x6150003a0000 against addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."; the duplicate entries are elided here ...]
00:39:17.220 [2024-07-22 19:43:36.108280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:17.220 [2024-07-22 19:43:36.108290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:17.220 qpair failed and we were unable to recover it.
00:39:17.220 [2024-07-22 19:43:36.108634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.108644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.109005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.109015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.109262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.109272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.109628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.109639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.109995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.110005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.110177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.110188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.110558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.110569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.110939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.110950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.111340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.111351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.111535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.111545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 
00:39:17.220 [2024-07-22 19:43:36.111682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.111692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.111933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.111944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.112131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.112143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.112485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.112496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.112855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.112866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.113244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.113257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.113660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.113671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.114034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.114045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.114373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.114384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.114755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.114767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 
00:39:17.220 [2024-07-22 19:43:36.115127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.115138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.115497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.115509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.115815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.115825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.116211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.116222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.116593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.116603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.116961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.116971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.117290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.117301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.117673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.117683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.118039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.118050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.118247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.118260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 
00:39:17.220 [2024-07-22 19:43:36.118582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.118597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.119015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.119025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.119375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.119387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.119761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.119771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.120133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.120144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.120368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.120379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.120752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.120762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.121130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.121140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.121500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.121513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.121709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.121720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 
00:39:17.220 [2024-07-22 19:43:36.122043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.122055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.122500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.122510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.122934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.122945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.123326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.123337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.123715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.123726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.123937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.123948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.124316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.124328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.124550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.124561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.124923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.124934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.125326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.125337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 
00:39:17.220 [2024-07-22 19:43:36.125706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.125719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.125915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.125929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.126297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.126308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.126676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.126686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.127047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.127057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.127279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.127290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.127648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.127659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.127866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.127876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.128164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.128175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.128382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.128393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 
00:39:17.220 [2024-07-22 19:43:36.128564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.220 [2024-07-22 19:43:36.128574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.220 qpair failed and we were unable to recover it. 00:39:17.220 [2024-07-22 19:43:36.128898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.128909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.129267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.129278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.129657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.129668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.130023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.130034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.130402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.130413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.130611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.130621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.130851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.130861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.131221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.131232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.131597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.131608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 
00:39:17.221 [2024-07-22 19:43:36.131963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.131973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.132360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.132372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.132725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.132736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.132798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.132808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.133134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.133144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.133505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.133516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.133891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.133902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.134146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.134156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.134586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.134599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.134950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.134961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 
00:39:17.221 [2024-07-22 19:43:36.135337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.135349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.135605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.135615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.135962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.135972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.136373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.136384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.136771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.136782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.137008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.137019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.137380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.137390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.137748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.137758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.138132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.138143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.138554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.138565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 
00:39:17.221 [2024-07-22 19:43:36.138926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.138937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.139254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.139265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.139626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.139637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.139992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.140002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.140364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.140376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.140743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.140754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.141133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.141144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.141504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.141515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.141876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.141887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.142239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.142251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 
00:39:17.221 [2024-07-22 19:43:36.142468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.142479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.142850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.142860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.143248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.143271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.143631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.143643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.143829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.143840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.144026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.144040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.144243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.144254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.144579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.144590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.144975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.144986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.145207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.145217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 
00:39:17.221 [2024-07-22 19:43:36.145414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.145424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.145605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.145616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.145965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.145976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.146168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.146178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.146454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.146466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.146569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.146582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.146812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.146823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.147175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.147185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.147547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.147559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.221 [2024-07-22 19:43:36.147782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.147792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 
00:39:17.221 [2024-07-22 19:43:36.147986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.221 [2024-07-22 19:43:36.147996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.221 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.148368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.148380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.148554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.148566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.148902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.148912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.149094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.149104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.149463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.149473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.149837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.149848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.150209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.150220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.150572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.150583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 00:39:17.504 [2024-07-22 19:43:36.150968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.504 [2024-07-22 19:43:36.150978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.504 qpair failed and we were unable to recover it. 
00:39:17.504 [2024-07-22 19:43:36.151244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:17.504 [2024-07-22 19:43:36.151255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:17.504 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats continuously from 19:43:36.151 through 19:43:36.221 (elapsed 00:39:17.504-00:39:17.510): posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; followed by "qpair failed and we were unable to recover it." ...]
00:39:17.510 [2024-07-22 19:43:36.221875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:17.510 [2024-07-22 19:43:36.221886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:17.510 qpair failed and we were unable to recover it.
00:39:17.510 [2024-07-22 19:43:36.222289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.222299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.222666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.222677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.222895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.222905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.223282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.223293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.223500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.223510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.223866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.223878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.224235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.224246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.224591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.224601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.224781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.224791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.225005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.225015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 
00:39:17.510 [2024-07-22 19:43:36.225339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.225350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.225706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.225717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.225923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.510 [2024-07-22 19:43:36.225933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.510 qpair failed and we were unable to recover it. 00:39:17.510 [2024-07-22 19:43:36.226313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.226323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.226545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.226555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.226881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.226892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.227245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.227255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.227630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.227642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.228006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.228016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.228398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.228409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 
00:39:17.511 [2024-07-22 19:43:36.228670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.228680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.228850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.228862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.229109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.229119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.229474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.229485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.229867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.229879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.230236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.230247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.230494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.230504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.230866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.230876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.231080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.231090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.231411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.231423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 
00:39:17.511 [2024-07-22 19:43:36.231788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.231800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.231999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.232013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.232377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.232389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.232748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.232758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.232909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.232920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.233279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.233290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.233638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.233650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.233911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.233922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.234272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.234283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.234505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.234514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 
00:39:17.511 [2024-07-22 19:43:36.234862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.234873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.235075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.235085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.235490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.235501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.235861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.235872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.236255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.236266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.236628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.236641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.237001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.237012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.237381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.237392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.237598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.237610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 00:39:17.511 [2024-07-22 19:43:36.237991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.511 [2024-07-22 19:43:36.238002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.511 qpair failed and we were unable to recover it. 
00:39:17.511 [2024-07-22 19:43:36.238365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.238376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.238567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.238578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.238961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.238972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.239318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.239329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.239694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.239706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.239952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.239962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.240183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.240192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.240579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.240589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.240954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.240964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.241327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.241339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 
00:39:17.512 [2024-07-22 19:43:36.241668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.241678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.242041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.242051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.242320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.242331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.242506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.242517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.242863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.242874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.243332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.243343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.243776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.243787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.244153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.244164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.244405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.244416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.244781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.244792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 
00:39:17.512 [2024-07-22 19:43:36.245002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.245013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.245204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.245215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.245557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.245567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.245928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.245941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.246346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.246357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.246710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.246722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.247100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.247111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.247335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.247345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.247706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.247717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.248064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.248075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 
00:39:17.512 [2024-07-22 19:43:36.248427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.248438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.248612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.248622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.248951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.248961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.249282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.249294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.249694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.249704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.250066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.250079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.250468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.250479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.250840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.512 [2024-07-22 19:43:36.250851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.512 qpair failed and we were unable to recover it. 00:39:17.512 [2024-07-22 19:43:36.251184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.251195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.251402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.251413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 
00:39:17.513 [2024-07-22 19:43:36.251702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.251713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.252089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.252099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.252311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.252322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.252696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.252706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.253027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.253039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.253281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.253297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.253674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.253684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.254087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.254098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.254492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.254503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.254865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.254876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 
00:39:17.513 [2024-07-22 19:43:36.255254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.255265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.255657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.255668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.256030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.256041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.256398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.256409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.256749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.256759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.257114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.257125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.257484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.257495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.257854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.257865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.258128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.258138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.258329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.258341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 
00:39:17.513 [2024-07-22 19:43:36.258709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.258719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.259076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.259087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.259453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.259464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.259812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.259823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.260136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.260147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.260405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.260418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.260801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.260812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.261170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.261182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.261396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.513 [2024-07-22 19:43:36.261408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.513 qpair failed and we were unable to recover it. 00:39:17.513 [2024-07-22 19:43:36.261777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.261788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 
00:39:17.514 [2024-07-22 19:43:36.262011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.262022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.262385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.262396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.262610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.262620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.262908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.262918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.263129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.263139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.263499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.263511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.263892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.263902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.264263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.264276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.264667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.264677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.264895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.264906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 
00:39:17.514 [2024-07-22 19:43:36.265273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.265283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.265653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.265663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.266049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.266060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.266272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.266282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.266526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.266537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.266717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.266727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.267108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.267119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.267471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.267483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.267886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.267897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.268249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.268260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 
00:39:17.514 [2024-07-22 19:43:36.268487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.268497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.268866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.268877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.269254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.269266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.269618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.269629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.269854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.269865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.270225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.270236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.270590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.270601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.270949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.270961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.271120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.271130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.271472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.271483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 
00:39:17.514 [2024-07-22 19:43:36.271874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.271885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.272242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.272254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.272511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.272523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.272880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.272891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.273162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.273172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.273554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.273563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.514 qpair failed and we were unable to recover it. 00:39:17.514 [2024-07-22 19:43:36.273872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.514 [2024-07-22 19:43:36.273882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.274056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.274067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.274310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.274321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.274650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.274663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 
00:39:17.515 [2024-07-22 19:43:36.274860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.274872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.275206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.275217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.275568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.275579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.275770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.275780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.276101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.276112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.276418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.276432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.276621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.276631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.276832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.276842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.277013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.277023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.277392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.277402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 
00:39:17.515 [2024-07-22 19:43:36.277641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.277652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.277873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.277884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.278095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.278106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.278450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.278461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.278521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.278531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.278891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.278902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.279278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.279289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.279661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.279672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.280044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.280054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.280259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.280269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 
00:39:17.515 [2024-07-22 19:43:36.280456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.280466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.280793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.280802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.281159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.281170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.281537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.281548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.281914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.281924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.282334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.282344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.282536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.282547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.282800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.282811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.283031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.283042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.283401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.283411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 
00:39:17.515 [2024-07-22 19:43:36.283788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.283798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.284190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.284205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.284397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.284407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.284759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.515 [2024-07-22 19:43:36.284769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.515 qpair failed and we were unable to recover it. 00:39:17.515 [2024-07-22 19:43:36.285126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.285138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.285475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.285486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.285835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.285845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.286212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.286223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.286610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.286620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.286986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.286996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 
00:39:17.516 [2024-07-22 19:43:36.287216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.287227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.287590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.287600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.287959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.287969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.288357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.288368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.288746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.288756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.289113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.289125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.289460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.289471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.289815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.289825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.290224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.290234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.290597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.290608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 
00:39:17.516 [2024-07-22 19:43:36.290948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.290959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.291337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.291347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.291720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.291730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.292087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.292098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.292363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.292374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.292602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.292612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.292986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.292996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.293370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.293382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.293737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.293747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.294092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.294102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 
00:39:17.516 [2024-07-22 19:43:36.294448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.294459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.294810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.294821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.295177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.295187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.295567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.295582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.295786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.295797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.296171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.296181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.296570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.296581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.296963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.296974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.297180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.297191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.297587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.297598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 
00:39:17.516 [2024-07-22 19:43:36.297664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.297672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.516 qpair failed and we were unable to recover it. 00:39:17.516 [2024-07-22 19:43:36.298032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.516 [2024-07-22 19:43:36.298043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.298256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.298267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.298648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.298658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.299024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.299035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.299444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.299455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.299839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.299849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.300112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.300123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.300479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.300489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.300834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.300844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 
00:39:17.517 [2024-07-22 19:43:36.301105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.301115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.301293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.301306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.301671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.301683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.302038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.302049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.302431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.302442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.302804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.302816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.303176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.303186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.303565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.303576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.303953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.303964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.304319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.304330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 
00:39:17.517 [2024-07-22 19:43:36.304685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.304696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.305054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.305065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.305415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.305426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.305796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.305806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.306169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.306180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.306551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.306561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.306942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.306953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.307166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.307176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.307547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.307557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.307781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.307791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 
00:39:17.517 [2024-07-22 19:43:36.308176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.308187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.308398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.308409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.308786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.308796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.309141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.309151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.309509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.309520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.309876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.309886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.310285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.310296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.310499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.310509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.310879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.517 [2024-07-22 19:43:36.310890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.517 qpair failed and we were unable to recover it. 00:39:17.517 [2024-07-22 19:43:36.311246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.311258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 
00:39:17.518 [2024-07-22 19:43:36.311464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.311476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.311664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.311673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.312037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.312050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.312275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.312285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.312656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.312667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.312878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.312889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.313237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.313247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.313589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.313600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.313788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.313798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.314155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.314166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 
00:39:17.518 [2024-07-22 19:43:36.314541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.314553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.314891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.314901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.315263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.315273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.315653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.315663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.316042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.316052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.316399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.316410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.316768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.316778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.317136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.317150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.317417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.317429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.317793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.317803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 
00:39:17.518 [2024-07-22 19:43:36.318154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.318165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.318453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.318464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.318846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.318856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.319217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.319228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.319457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.319467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.319823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.319833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.320243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.320254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.320602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.320613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.320969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.320979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 00:39:17.518 [2024-07-22 19:43:36.321228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.518 [2024-07-22 19:43:36.321239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.518 qpair failed and we were unable to recover it. 
00:39:17.518 [2024-07-22 19:43:36.321445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:17.518 [2024-07-22 19:43:36.321456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:17.518 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 2024-07-22 19:43:36.321 through 19:43:36.390 (elapsed 00:39:17.518-00:39:17.524); no attempt succeeds ...]
00:39:17.524 [2024-07-22 19:43:36.390646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.390657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.390976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.390987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.391332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.391344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.391731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.391741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.392099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.392110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.392484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.392495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.392689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.392701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.393048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.393059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.393416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.393427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 00:39:17.524 [2024-07-22 19:43:36.393783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.524 [2024-07-22 19:43:36.393794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.524 qpair failed and we were unable to recover it. 
00:39:17.525 [2024-07-22 19:43:36.394015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.394027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.394205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.394217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.394621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.394632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.394984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.394995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.395351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.395362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.395560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.395572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.395857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.395868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.396219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.396230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.396576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.396588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.396972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.396982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 
00:39:17.525 [2024-07-22 19:43:36.397329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.397340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.397662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.397672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.398082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.398093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.398460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.398472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.398827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.398838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.399193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.399210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.399576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.399587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.399966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.399976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.400354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.400364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.400721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.400732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 
00:39:17.525 [2024-07-22 19:43:36.401132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.401143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.401480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.401491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.401749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.401760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.402119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.402133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.402358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.402369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.402736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.402746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.403100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.403111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.403453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.403464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.403856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.403867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.404225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.404236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 
00:39:17.525 [2024-07-22 19:43:36.404579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.404591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.404949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.404961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.405312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.405326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.405672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.405683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.405889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.405899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.406262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.406273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.406636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.525 [2024-07-22 19:43:36.406646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.525 qpair failed and we were unable to recover it. 00:39:17.525 [2024-07-22 19:43:36.406824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.406836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.407038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.407048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.407420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.407431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 
00:39:17.526 [2024-07-22 19:43:36.407796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.407807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.408189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.408199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.408629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.408640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.408995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.409005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.409250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.409261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.409629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.409640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.410004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.410015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.410347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.410357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.410729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.410739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.411140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.411151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 
00:39:17.526 [2024-07-22 19:43:36.411378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.411388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.411751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.411761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.412076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.412087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.412450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.412461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.412819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.412829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.413179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.413189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.413549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.413560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.413913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.413924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.414290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.414301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.414606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.414617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 
00:39:17.526 [2024-07-22 19:43:36.414981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.414991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.415348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.415359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.415738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.415748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.416115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.416126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.416501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.416512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.416877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.416887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.416946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.416956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.417293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.417304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.417665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.417676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.418033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.418044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 
00:39:17.526 [2024-07-22 19:43:36.418367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.418378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.418736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.418746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.419001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.419013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.419389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.419400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.526 qpair failed and we were unable to recover it. 00:39:17.526 [2024-07-22 19:43:36.419840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.526 [2024-07-22 19:43:36.419851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.420219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.420230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.420435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.420446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.420773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.420784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.421011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.421021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.421264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.421275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 
00:39:17.527 [2024-07-22 19:43:36.421588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.421599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.421776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.421786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.422152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.422163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.422504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.422515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.422787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.422798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.423153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.423164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.423575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.423586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.423941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.423956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.424189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.424202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.424552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.424563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 
00:39:17.527 [2024-07-22 19:43:36.424909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.424920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.425278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.425289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.425691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.425702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.426058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.426068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.426421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.426432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.426618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.426628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.426825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.426835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.427210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.427221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.427578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.427588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.427954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.427966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 
00:39:17.527 [2024-07-22 19:43:36.428324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.428334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.428700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.428711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.428766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.428776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.429104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.429115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.429316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.429326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.429658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.527 [2024-07-22 19:43:36.429668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.527 qpair failed and we were unable to recover it. 00:39:17.527 [2024-07-22 19:43:36.430029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.430039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.430444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.430455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.430678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.430689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.430929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.430940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 
00:39:17.528 [2024-07-22 19:43:36.431184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.431194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.431493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.431504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.431762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.431775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.432149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.432159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.432353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.432364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.432604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.432614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.432966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.432977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.433057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.433067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.433447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.433458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.433816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.433826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 
00:39:17.528 [2024-07-22 19:43:36.434213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.434223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.434620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.434630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.434928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.434939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.528 [2024-07-22 19:43:36.435274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.528 [2024-07-22 19:43:36.435284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.528 qpair failed and we were unable to recover it. 00:39:17.807 [2024-07-22 19:43:36.435661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.807 [2024-07-22 19:43:36.435673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.807 qpair failed and we were unable to recover it. 00:39:17.807 [2024-07-22 19:43:36.436040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.807 [2024-07-22 19:43:36.436050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.807 qpair failed and we were unable to recover it. 00:39:17.807 [2024-07-22 19:43:36.436411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.807 [2024-07-22 19:43:36.436422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.807 qpair failed and we were unable to recover it. 00:39:17.807 [2024-07-22 19:43:36.436780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.807 [2024-07-22 19:43:36.436790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.807 qpair failed and we were unable to recover it. 00:39:17.807 [2024-07-22 19:43:36.437176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.807 [2024-07-22 19:43:36.437188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.807 qpair failed and we were unable to recover it. 00:39:17.807 [2024-07-22 19:43:36.437559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.807 [2024-07-22 19:43:36.437570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.807 qpair failed and we were unable to recover it. 
00:39:17.807 [2024-07-22 19:43:36.437948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.807 [2024-07-22 19:43:36.437958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.807 qpair failed and we were unable to recover it. 00:39:17.807 [2024-07-22 19:43:36.438380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.438391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.438774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.438784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.438986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.438996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.439309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.439319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.439412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.439420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.439640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.439651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.439999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.440009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.440360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.440371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.440723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.440734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 
00:39:17.808 [2024-07-22 19:43:36.440923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.440933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.441155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.441165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.441462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.441473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.441675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.441685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.442055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.442065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.442437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.442448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.442831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.442841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.443190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.443207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.443554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.443566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.443924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.443934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 
00:39:17.808 [2024-07-22 19:43:36.444135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.444145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:17.808 [2024-07-22 19:43:36.444508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.444524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:39:17.808 [2024-07-22 19:43:36.444770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.444781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:17.808 [2024-07-22 19:43:36.445143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.445153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.445228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.445238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:17.808 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:17.808 [2024-07-22 19:43:36.445669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.445680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.445904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.445914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.446298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.446309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 
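The xtrace fragments from common/autotest_common.sh@858 ("(( i == 0 ))") and @862 ("return 0") interleaved here appear to come from a poll loop that waits for the freshly started nvmf target to become ready before timing_exit start_nvmf_tgt is reached. A generic sketch of that pattern, assuming a hypothetical helper name and the default RPC socket path (the real helper lives in autotest_common.sh and differs in detail):

    wait_for_rpc_socket() {
        local sock=${1:-/var/tmp/spdk.sock}
        local i
        for ((i = 60; i > 0; i--)); do
            [[ -S $sock ]] && return 0   # socket exists -> target is accepting RPCs
            sleep 0.5
        done
        return 1                         # i reached 0: give up instead of hanging the test
    }

When the counter runs out without the socket appearing, the harness fails the test rather than waiting forever.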
00:39:17.808 [2024-07-22 19:43:36.446686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.446697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.447052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.447062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.447419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.447429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.447694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.808 [2024-07-22 19:43:36.447704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.808 qpair failed and we were unable to recover it. 00:39:17.808 [2024-07-22 19:43:36.448061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.448071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.448159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.448168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 
00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 [2024-07-22 19:43:36.449378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read 
completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Write completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 Read completed with error (sct=0, sc=8) 00:39:17.809 starting I/O failed 00:39:17.809 [2024-07-22 19:43:36.449884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:17.809 [2024-07-22 19:43:36.450344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.450377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.450667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.450684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.450919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.450937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 
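The two bursts of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" show outstanding I/Os being completed with an abort status once the TCP connection drops: status code type 0 is the NVMe Generic Command Status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is how aborted requests are typically reported when a qpair is torn down. The "CQ transport error -6 (No such device or address)" lines are the corresponding -ENXIO returned by spdk_nvme_qpair_process_completions() for qpairs 3 and 4. A quick way to tally these events from a saved copy of this console output (the file name is illustrative):

    LOG=console.log
    grep -c 'starting I/O failed'           "$LOG"   # I/Os completed with the abort status
    grep -c 'CQ transport error -6'         "$LOG"   # qpairs torn down with -ENXIO
    grep -c 'connect() failed, errno = 111' "$LOG"   # refused reconnect attempts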
00:39:17.809 [2024-07-22 19:43:36.451456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.451503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.451918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.451937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.452426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.452472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.452876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.452895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.453421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.453468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.453877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.453896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.454287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.454304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.454721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.454736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.455106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.455122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.809 [2024-07-22 19:43:36.455423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.455439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 
00:39:17.809 [2024-07-22 19:43:36.455817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.809 [2024-07-22 19:43:36.455832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.809 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.456091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.456105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.456508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.456524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.456841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.456855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.457212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.457228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.457590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.457605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.457799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.457814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.458154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.458169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.458450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.458465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.458817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.458834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 
00:39:17.810 [2024-07-22 19:43:36.459437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.459455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.459537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.459551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.459779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.459794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.460161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.460176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.460411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.460426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.460825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.460840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.461225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.461240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.461627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.461643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.461908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.461923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.462291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.462306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 
00:39:17.810 [2024-07-22 19:43:36.462662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.462677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.463043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.463058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.463441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.463457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.463717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.463731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.464121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.464136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.464581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.464596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.464956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.464974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.465267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.465281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.465648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.465662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.466004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.466019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 
00:39:17.810 [2024-07-22 19:43:36.466281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.466297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.466653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.466669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.467009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.467024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.467315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.467331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.810 [2024-07-22 19:43:36.467562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.810 [2024-07-22 19:43:36.467580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.810 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.467789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.467804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.468190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.468209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.468506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.468521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.468902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.468917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.469167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.469182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 
00:39:17.811 [2024-07-22 19:43:36.469312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.469327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.469642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.469657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.469839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.469855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.470192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.470210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.470570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.470585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.470951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.470966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.471220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.471236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.471451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.471466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.471811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.471825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.472150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.472164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 
00:39:17.811 [2024-07-22 19:43:36.472529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.472545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.472842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.472857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.473225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.473241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.473622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.473638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.474013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.474027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.474449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.474464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.474839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.474855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.475119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.475134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.475505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.475521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.475886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.475900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 
00:39:17.811 [2024-07-22 19:43:36.476294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.476310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.476683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.476699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.477074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.477089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.477459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.477475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.477858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.477874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.478292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.478307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.478525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.478542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.478960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.478974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.479327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.479343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.479768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.479783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 
00:39:17.811 [2024-07-22 19:43:36.480138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.811 [2024-07-22 19:43:36.480154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.811 qpair failed and we were unable to recover it. 00:39:17.811 [2024-07-22 19:43:36.480479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.480495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.480656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.480672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.480926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.480941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.481136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.481151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.481493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.481508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.481883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.481899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.482256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.482271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.482462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.482477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.482857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.482873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 
00:39:17.812 [2024-07-22 19:43:36.483065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.483080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.483267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.483280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.483496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.483510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.483923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.483939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.484297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.484312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.484512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.484528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.484862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.484876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.485132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.485146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:17.812 [2024-07-22 19:43:36.485545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.485562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 
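The trap line interleaved above registers the harness cleanup (process_shm, nvmftestfini) for SIGINT, SIGTERM and EXIT, so the shared-memory dump and target shutdown run even if the test is interrupted part-way through. A minimal version of the same pattern, with a stand-in cleanup function in place of the real helpers:

    cleanup_fn() {
        echo "dump shared memory for NVMF_APP_SHM_ID, then stop the nvmf target"
    }
    # '|| :' mirrors the harness: a failing cleanup step must not change the exit status
    trap 'cleanup_fn || :' SIGINT SIGTERM EXIT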
00:39:17.812 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:17.812 [2024-07-22 19:43:36.485930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.485945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.812 [2024-07-22 19:43:36.486304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.486320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:17.812 [2024-07-22 19:43:36.486684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.486704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.487048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.487064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.487438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.487453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.487783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.487801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.487999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.488013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.488393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.488408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 
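rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the bdev_malloc_create step traced above creates a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. Run by hand, such a bdev would typically be exported over NVMe/TCP roughly as follows; only the first line is taken from this log, and the subsystem NQN, serial number and transport setup are illustrative rather than what target_disconnect.sh actually issues:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address matches the 10.0.0.2:4420 endpoint the host is shown reconnecting to throughout this log.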
00:39:17.812 [2024-07-22 19:43:36.488775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.488790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.489100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.812 [2024-07-22 19:43:36.489115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.812 qpair failed and we were unable to recover it. 00:39:17.812 [2024-07-22 19:43:36.489391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.489406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.489786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.489801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.490194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.490212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.490531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.490545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.490856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.490870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.491272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.491287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.491662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.491677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.491889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.491904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 
00:39:17.813 [2024-07-22 19:43:36.492287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.492302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.492664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.492679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.492893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.492908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.492992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.493004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388b80 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.493552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.493663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.494109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.494160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.494691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.494801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.495459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.495567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.496090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.496140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.496684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.496793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 
00:39:17.813 [2024-07-22 19:43:36.497469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.497577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.497960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.498012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.498300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.498343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.498751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.498790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.499257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.499299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.499710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.499750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.500192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.500243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.500560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.500607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.501031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.501072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.501498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.501539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 
00:39:17.813 [2024-07-22 19:43:36.501943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.501983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.502257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.502298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.502695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.502735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.503162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.503217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.503651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.503697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.813 qpair failed and we were unable to recover it. 00:39:17.813 [2024-07-22 19:43:36.504126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.813 [2024-07-22 19:43:36.504166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.504496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.504543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.504833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.504873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.505275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.505316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.505617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.505662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 
00:39:17.814 [2024-07-22 19:43:36.506082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.506123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.506446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.506490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.506771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.506811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.507228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.507268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.507688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.507729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.508161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.508230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.508504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.508547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.508989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.509029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.509317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.509359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.509792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.509831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 
00:39:17.814 [2024-07-22 19:43:36.510239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.510279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.510676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.510716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.511132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.511172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.511605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.511646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.512052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.512092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.512524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.512565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.512986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.513025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.513420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.513461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.513722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.513762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.514213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.514254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 
00:39:17.814 [2024-07-22 19:43:36.514703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.514743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.515022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.515062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.515504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.515544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.515928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.515967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.516443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.516484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.516853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.516893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.517302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.517342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.517820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.517859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.518185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.518233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.518640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.518680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 
00:39:17.814 [2024-07-22 19:43:36.519044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.519083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.519404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.519444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.814 [2024-07-22 19:43:36.519755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.814 [2024-07-22 19:43:36.519794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.814 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.520231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.520272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.520699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.520745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.521163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.521215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.521619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.521660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.522090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.522144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.522567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.522608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.523032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.523072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 
00:39:17.815 [2024-07-22 19:43:36.523326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.523368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.523761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.523800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.524031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.524070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.524497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.524537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.524911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.524951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.525371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.525411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.525640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.525680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.526109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.526149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.526454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.526496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.526921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.526961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 
00:39:17.815 [2024-07-22 19:43:36.527271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.527311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.527727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.527767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.528033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.528073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.528507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.528549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.528961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.529002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.529423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.529464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.529730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.529770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.530218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.530259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.530685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.530724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.530983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.531023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 
00:39:17.815 [2024-07-22 19:43:36.531457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.531497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.531933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.531973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.532236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.532276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.532565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.532604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.533027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.533067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.533357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.533397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.533717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.533756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.534174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.534222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.534578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.534619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.815 [2024-07-22 19:43:36.535024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.535063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 
00:39:17.815 [2024-07-22 19:43:36.535467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.815 [2024-07-22 19:43:36.535507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.815 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.535788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.535828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.536251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.536293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.536731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.536770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.537171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.537226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.537648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.537688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.538110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.538151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.538598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.538638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.538983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.539023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.539460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.539503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 
00:39:17.816 [2024-07-22 19:43:36.539922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.539962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.540381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.540422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.540834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.540875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 Malloc0 00:39:17.816 [2024-07-22 19:43:36.541342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.541383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.541799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.541840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.816 [2024-07-22 19:43:36.542096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.542137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:17.816 [2024-07-22 19:43:36.542561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.542602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.816 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:17.816 [2024-07-22 19:43:36.543029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.543069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 
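Interleaved with the reconnect noise, the target bring-up continues: the bdev name Malloc0 is echoed back and the TCP transport is created; the "TCP Transport Init" notice a little further below confirms it took effect. A hedged standalone equivalent of that trace, under the same scripts/rpc.py assumption and with the -t tcp -o flags copied verbatim from the trace rather than re-derived, would be:
  # Create the NVMe-oF TCP transport with the options this test passes
  ./scripts/rpc.py nvmf_create_transport -t tcp -o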
00:39:17.816 [2024-07-22 19:43:36.543503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.543545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.543938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.543978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.544350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.544391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.544819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.544859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.545127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.545167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.545597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.545638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.546002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.546042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.546424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.546464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.546765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.546806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.547194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.547242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 
00:39:17.816 [2024-07-22 19:43:36.547534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.547577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.547903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.547947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.548383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.548425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.548714] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.816 [2024-07-22 19:43:36.548855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.548951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.549369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.549412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.549797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.549836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.550256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.550297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.550700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.550739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.816 qpair failed and we were unable to recover it. 00:39:17.816 [2024-07-22 19:43:36.551107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.816 [2024-07-22 19:43:36.551145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 
00:39:17.817 [2024-07-22 19:43:36.551547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.551586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.552008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.552048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.552373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.552417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.552853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.552893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.553321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.553362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.553790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.553837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.554120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.554160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.554441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.554482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.554868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.554908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.555335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.555378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 
00:39:17.817 [2024-07-22 19:43:36.555625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.555664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.556019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.556058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.556495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.556537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.556943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.556983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.557412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.557453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.817 [2024-07-22 19:43:36.557835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.557875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:17.817 [2024-07-22 19:43:36.558297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.558337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.817 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:17.817 [2024-07-22 19:43:36.558774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.558816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 
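The next xtrace step creates the NVMe-oF subsystem the host keeps trying to reach. As a sketch under the same rpc.py assumption: -a allows any host NQN to connect and -s sets the subsystem serial number, both flags exactly as they appear in the trace:
  # Create subsystem nqn.2016-06.io.spdk:cnode1, allow any host, set its serial number
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001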
00:39:17.817 [2024-07-22 19:43:36.559276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.559316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.559766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.559807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.560189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.560238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.560679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.560719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.561139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.561179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.561492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.561532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.561917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.561956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.562376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.562417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.562793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.562832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 00:39:17.817 [2024-07-22 19:43:36.563198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.817 [2024-07-22 19:43:36.563247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.817 qpair failed and we were unable to recover it. 
00:39:17.818 [2024-07-22 19:43:36.563526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.563566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.564023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.564063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.564366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.564407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.564651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.564691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.565114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.565153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.565581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.565622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.566054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.566094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.566521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.566561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.566860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.566901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.567332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.567374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 
00:39:17.818 [2024-07-22 19:43:36.567826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.567866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.568251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.568291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.568745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.568784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.569212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.569254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.569545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.569584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.818 [2024-07-22 19:43:36.569856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.569896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:17.818 [2024-07-22 19:43:36.570323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.570365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.818 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:17.818 [2024-07-22 19:43:36.570792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.570832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 
00:39:17.818 [2024-07-22 19:43:36.571256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.571297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.571722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.571762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.572088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.572128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.572579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.572620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.573007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.573046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.573466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.573507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.573774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.573813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.574254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.574295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.574546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.574586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.575031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.575072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 
00:39:17.818 [2024-07-22 19:43:36.575345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.575387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.575777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.575817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.576099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.576151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.576539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.576580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.576999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.577039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.577456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.577497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.577904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.577943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.818 [2024-07-22 19:43:36.578356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.818 [2024-07-22 19:43:36.578396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.818 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.578648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.578689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.578979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.579019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 
00:39:17.819 [2024-07-22 19:43:36.579432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.579472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.579734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.579773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.580191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.580239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.580501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.580541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.580977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.581017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.581442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.581481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.819 [2024-07-22 19:43:36.581950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.581989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:17.819 [2024-07-22 19:43:36.582289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.582330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 
00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:17.819 [2024-07-22 19:43:36.582757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.582796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.583241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.583283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.583573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.583613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.584039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.584080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.584534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.584575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.584988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.585027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.585544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.585585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.586006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.586047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.586474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.586513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 
00:39:17.819 [2024-07-22 19:43:36.586880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.586919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.587216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.587258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.587623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.587663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.588065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.588104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.588529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.588570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 [2024-07-22 19:43:36.588952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:17.819 [2024-07-22 19:43:36.588992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:39:17.819 qpair failed and we were unable to recover it. 
00:39:17.819 [2024-07-22 19:43:36.589037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:17.819 [2024-07-22 19:43:36.600315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.819 [2024-07-22 19:43:36.600505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.819 [2024-07-22 19:43:36.600559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.819 [2024-07-22 19:43:36.600591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.819 [2024-07-22 19:43:36.600621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.819 [2024-07-22 19:43:36.600682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.819 qpair failed and we were unable to recover it. 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.819 19:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3185741 00:39:17.819 [2024-07-22 19:43:36.610158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.819 [2024-07-22 19:43:36.610298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.819 [2024-07-22 19:43:36.610333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.819 [2024-07-22 19:43:36.610354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.819 [2024-07-22 19:43:36.610370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.819 [2024-07-22 19:43:36.610407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.819 qpair failed and we were unable to recover it. 
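For reference, the target-side setup exercised by this test case reduces to the four RPCs visible in the xtrace above: create subsystem nqn.2016-06.io.spdk:cnode1, attach the Malloc0 namespace, and add TCP listeners for the subsystem and for discovery on 10.0.0.2:4420. A minimal sketch of the equivalent standalone invocations, assuming SPDK's stock scripts/rpc.py client (which the harness's rpc_cmd wrapper ultimately calls) and an already-created TCP transport and Malloc0 bdev, neither of which is shown in this excerpt:

    # create the subsystem, allowing any host (-a), with serial SPDK00000000000001
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # expose the Malloc0 bdev as a namespace of that subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listen for the subsystem and for discovery on 10.0.0.2:4420 over TCP
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420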
00:39:17.819 [2024-07-22 19:43:36.620178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.819 [2024-07-22 19:43:36.620316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.819 [2024-07-22 19:43:36.620353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.819 [2024-07-22 19:43:36.620374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.819 [2024-07-22 19:43:36.620389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.819 [2024-07-22 19:43:36.620426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.630139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.630259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.630285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.630298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.630308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.630334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.640146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.640249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.640272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.640283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.640292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.640319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 
00:39:17.820 [2024-07-22 19:43:36.650139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.650237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.650262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.650274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.650283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.650305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.660215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.660314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.660337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.660349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.660358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.660380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.670241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.670343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.670367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.670378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.670388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.670412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 
00:39:17.820 [2024-07-22 19:43:36.680297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.680411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.680435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.680448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.680457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.680480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.690300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.690455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.690479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.690491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.690500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.690525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.700327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.700433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.700458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.700470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.700479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.700501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 
00:39:17.820 [2024-07-22 19:43:36.710413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.710556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.710583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.710595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.710605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.710628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.720437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.720542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.720567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.720578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.720587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.720609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.730464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.730563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.730589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.730608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.730620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.730643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 
00:39:17.820 [2024-07-22 19:43:36.740513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.740609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.740634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.740646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.740655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.740678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:17.820 [2024-07-22 19:43:36.750498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:17.820 [2024-07-22 19:43:36.750604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:17.820 [2024-07-22 19:43:36.750629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:17.820 [2024-07-22 19:43:36.750641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:17.820 [2024-07-22 19:43:36.750650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:17.820 [2024-07-22 19:43:36.750674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:17.820 qpair failed and we were unable to recover it. 00:39:18.084 [2024-07-22 19:43:36.760470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.760584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.760610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.760623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.760633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.760656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 
00:39:18.084 [2024-07-22 19:43:36.770616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.770718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.770743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.770755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.770764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.770787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 00:39:18.084 [2024-07-22 19:43:36.780569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.780675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.780702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.780714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.780724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.780747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 00:39:18.084 [2024-07-22 19:43:36.790635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.790745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.790772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.790784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.790793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.790817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 
00:39:18.084 [2024-07-22 19:43:36.800709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.800815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.800843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.800855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.800865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.800890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 00:39:18.084 [2024-07-22 19:43:36.810770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.810886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.810925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.810940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.810951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.810981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 00:39:18.084 [2024-07-22 19:43:36.820670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.820783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.820815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.820833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.820844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.820872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 
00:39:18.084 [2024-07-22 19:43:36.830774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.830988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.831028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.831043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.831053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.831084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 00:39:18.084 [2024-07-22 19:43:36.840869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.840987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.841020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.841034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.841044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.841072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 00:39:18.084 [2024-07-22 19:43:36.850851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.850965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.850996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.851009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.851018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.084 [2024-07-22 19:43:36.851046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.084 qpair failed and we were unable to recover it. 
00:39:18.084 [2024-07-22 19:43:36.860899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.084 [2024-07-22 19:43:36.861018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.084 [2024-07-22 19:43:36.861052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.084 [2024-07-22 19:43:36.861065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.084 [2024-07-22 19:43:36.861074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.861103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.870974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.871117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.871154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.871172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.871182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.871220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.880971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.881097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.881132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.881145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.881155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.881184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 
00:39:18.085 [2024-07-22 19:43:36.891017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.891139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.891174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.891187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.891196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.891235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.900990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.901116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.901152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.901165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.901175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.901214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.911078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.911223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.911260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.911278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.911288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.911320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 
00:39:18.085 [2024-07-22 19:43:36.921116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.921253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.921289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.921303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.921313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.921342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.931126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.931266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.931302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.931315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.931325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.931354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.941091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.941229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.941265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.941279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.941289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.941319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 
00:39:18.085 [2024-07-22 19:43:36.951164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.951343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.951378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.951394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.951403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.951432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.961257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.961386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.961423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.961440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.961451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.961481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.971261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.971430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.971466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.971480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.971490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.971521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 
00:39:18.085 [2024-07-22 19:43:36.981184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.981319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.981354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.981367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.981377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.981406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.085 [2024-07-22 19:43:36.991319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.085 [2024-07-22 19:43:36.991444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.085 [2024-07-22 19:43:36.991480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.085 [2024-07-22 19:43:36.991494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.085 [2024-07-22 19:43:36.991504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.085 [2024-07-22 19:43:36.991534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.085 qpair failed and we were unable to recover it. 00:39:18.086 [2024-07-22 19:43:37.001327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.086 [2024-07-22 19:43:37.001456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.086 [2024-07-22 19:43:37.001497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.086 [2024-07-22 19:43:37.001512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.086 [2024-07-22 19:43:37.001522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.086 [2024-07-22 19:43:37.001553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.086 qpair failed and we were unable to recover it. 
00:39:18.086 [2024-07-22 19:43:37.011307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.086 [2024-07-22 19:43:37.011436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.086 [2024-07-22 19:43:37.011471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.086 [2024-07-22 19:43:37.011485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.086 [2024-07-22 19:43:37.011495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.086 [2024-07-22 19:43:37.011524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.086 qpair failed and we were unable to recover it. 00:39:18.086 [2024-07-22 19:43:37.021403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.086 [2024-07-22 19:43:37.021524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.086 [2024-07-22 19:43:37.021559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.086 [2024-07-22 19:43:37.021573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.086 [2024-07-22 19:43:37.021583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.086 [2024-07-22 19:43:37.021613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.086 qpair failed and we were unable to recover it. 00:39:18.086 [2024-07-22 19:43:37.031437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.086 [2024-07-22 19:43:37.031607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.086 [2024-07-22 19:43:37.031643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.086 [2024-07-22 19:43:37.031657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.086 [2024-07-22 19:43:37.031667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.086 [2024-07-22 19:43:37.031695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.086 qpair failed and we were unable to recover it. 
00:39:18.348 [2024-07-22 19:43:37.041438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.348 [2024-07-22 19:43:37.041614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.348 [2024-07-22 19:43:37.041650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.348 [2024-07-22 19:43:37.041664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.041673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.041712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.051465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.051586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.051621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.051635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.051646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.051675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.061483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.061605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.061640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.061654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.061663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.061693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 
00:39:18.349 [2024-07-22 19:43:37.071515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.071640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.071676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.071690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.071700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.071729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.081483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.081621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.081657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.081670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.081680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.081709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.091572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.091696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.091736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.091748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.091758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.091787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 
00:39:18.349 [2024-07-22 19:43:37.101630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.101761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.101796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.101809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.101819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.101848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.111616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.111745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.111781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.111794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.111804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.111833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.121611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.121758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.121793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.121806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.121816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.121845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 
00:39:18.349 [2024-07-22 19:43:37.131741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.131868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.131904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.131918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.131933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.131962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.141855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.141999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.142046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.142063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.142073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.142110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.151830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.151967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.152005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.152020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.152029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.152061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 
00:39:18.349 [2024-07-22 19:43:37.161794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.161919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.161955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.161969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.161978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.349 [2024-07-22 19:43:37.162009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.349 qpair failed and we were unable to recover it. 00:39:18.349 [2024-07-22 19:43:37.171858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.349 [2024-07-22 19:43:37.171998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.349 [2024-07-22 19:43:37.172034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.349 [2024-07-22 19:43:37.172047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.349 [2024-07-22 19:43:37.172057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.172089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.181890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.182021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.182056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.182069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.182079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.182108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 
00:39:18.350 [2024-07-22 19:43:37.191853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.192036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.192073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.192087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.192096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.192125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.201899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.202031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.202066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.202079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.202088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.202118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.211948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.212128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.212163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.212176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.212186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.212230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 
00:39:18.350 [2024-07-22 19:43:37.222008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.222135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.222170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.222189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.222198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.222242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.232026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.232152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.232187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.232208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.232219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.232249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.242059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.242187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.242235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.242249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.242265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.242295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 
00:39:18.350 [2024-07-22 19:43:37.252051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.252192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.252235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.252249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.252258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.252287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.262312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.262435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.262469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.262483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.262492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.262521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.272120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.272255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.272290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.272304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.272313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.272342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 
00:39:18.350 [2024-07-22 19:43:37.282137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.282281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.282315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.282328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.282338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.282368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.350 [2024-07-22 19:43:37.292216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.350 [2024-07-22 19:43:37.292357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.350 [2024-07-22 19:43:37.292392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.350 [2024-07-22 19:43:37.292405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.350 [2024-07-22 19:43:37.292414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.350 [2024-07-22 19:43:37.292443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.350 qpair failed and we were unable to recover it. 00:39:18.613 [2024-07-22 19:43:37.302249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.613 [2024-07-22 19:43:37.302361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.613 [2024-07-22 19:43:37.302397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.613 [2024-07-22 19:43:37.302410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.613 [2024-07-22 19:43:37.302420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.613 [2024-07-22 19:43:37.302449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.613 qpair failed and we were unable to recover it. 
00:39:18.613 [2024-07-22 19:43:37.312326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.613 [2024-07-22 19:43:37.312452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.613 [2024-07-22 19:43:37.312487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.613 [2024-07-22 19:43:37.312505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.613 [2024-07-22 19:43:37.312515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.613 [2024-07-22 19:43:37.312545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.613 qpair failed and we were unable to recover it. 00:39:18.613 [2024-07-22 19:43:37.322306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.613 [2024-07-22 19:43:37.322429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.613 [2024-07-22 19:43:37.322466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.613 [2024-07-22 19:43:37.322479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.613 [2024-07-22 19:43:37.322488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.613 [2024-07-22 19:43:37.322517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.613 qpair failed and we were unable to recover it. 00:39:18.613 [2024-07-22 19:43:37.332327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.613 [2024-07-22 19:43:37.332442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.613 [2024-07-22 19:43:37.332477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.613 [2024-07-22 19:43:37.332491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.332501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.332530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 
00:39:18.614 [2024-07-22 19:43:37.342373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.342500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.342537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.342550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.342559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.342589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.352291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.352419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.352454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.352468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.352477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.352506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.362371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.362490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.362526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.362539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.362548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.362576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 
00:39:18.614 [2024-07-22 19:43:37.372428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.372551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.372586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.372599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.372608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.372638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.382471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.382599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.382634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.382646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.382656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.382684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.392496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.392618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.392653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.392666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.392675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.392704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 
00:39:18.614 [2024-07-22 19:43:37.402562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.402687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.402729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.402742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.402751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.402781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.412577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.412709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.412744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.412756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.412765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.412794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.422580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.422702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.422737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.422751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.422760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.422789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 
00:39:18.614 [2024-07-22 19:43:37.432692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.432823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.432857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.432871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.432880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.432910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.442704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.442829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.442864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.442878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.442887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.442920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.614 [2024-07-22 19:43:37.452675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.452858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.452905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.452921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.452932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.452968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 
00:39:18.614 [2024-07-22 19:43:37.462723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.614 [2024-07-22 19:43:37.462849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.614 [2024-07-22 19:43:37.462897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.614 [2024-07-22 19:43:37.462913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.614 [2024-07-22 19:43:37.462924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.614 [2024-07-22 19:43:37.462966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.614 qpair failed and we were unable to recover it. 00:39:18.615 [2024-07-22 19:43:37.472745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.472882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.472928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.472945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.472955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.472992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 00:39:18.615 [2024-07-22 19:43:37.482807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.482950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.482997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.483013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.483025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.483060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 
00:39:18.615 [2024-07-22 19:43:37.492780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.492905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.492948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.492962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.492972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.493004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 00:39:18.615 [2024-07-22 19:43:37.502851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.502980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.503024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.503038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.503047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.503078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 00:39:18.615 [2024-07-22 19:43:37.512875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.512998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.513034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.513047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.513057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.513087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 
00:39:18.615 [2024-07-22 19:43:37.523013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.523135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.523170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.523183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.523192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.523231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 00:39:18.615 [2024-07-22 19:43:37.532905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.533035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.533070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.533084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.533097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.533127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 00:39:18.615 [2024-07-22 19:43:37.542976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.543096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.543132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.543146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.543155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.543183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 
00:39:18.615 [2024-07-22 19:43:37.552950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.553093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.553129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.553142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.553153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.553184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 00:39:18.615 [2024-07-22 19:43:37.562989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.615 [2024-07-22 19:43:37.563117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.615 [2024-07-22 19:43:37.563152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.615 [2024-07-22 19:43:37.563166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.615 [2024-07-22 19:43:37.563175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.615 [2024-07-22 19:43:37.563213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.615 qpair failed and we were unable to recover it. 00:39:18.878 [2024-07-22 19:43:37.573121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.573265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.573300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.878 [2024-07-22 19:43:37.573314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.878 [2024-07-22 19:43:37.573323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.878 [2024-07-22 19:43:37.573355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.878 qpair failed and we were unable to recover it. 
00:39:18.878 [2024-07-22 19:43:37.583090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.583227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.583263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.878 [2024-07-22 19:43:37.583278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.878 [2024-07-22 19:43:37.583287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.878 [2024-07-22 19:43:37.583316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.878 qpair failed and we were unable to recover it. 00:39:18.878 [2024-07-22 19:43:37.593106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.593239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.593276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.878 [2024-07-22 19:43:37.593289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.878 [2024-07-22 19:43:37.593298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.878 [2024-07-22 19:43:37.593327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.878 qpair failed and we were unable to recover it. 00:39:18.878 [2024-07-22 19:43:37.603136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.603317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.603353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.878 [2024-07-22 19:43:37.603366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.878 [2024-07-22 19:43:37.603375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.878 [2024-07-22 19:43:37.603404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.878 qpair failed and we were unable to recover it. 
00:39:18.878 [2024-07-22 19:43:37.613209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.613330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.613366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.878 [2024-07-22 19:43:37.613379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.878 [2024-07-22 19:43:37.613389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.878 [2024-07-22 19:43:37.613419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.878 qpair failed and we were unable to recover it. 00:39:18.878 [2024-07-22 19:43:37.623187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.623313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.623348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.878 [2024-07-22 19:43:37.623362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.878 [2024-07-22 19:43:37.623377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.878 [2024-07-22 19:43:37.623407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.878 qpair failed and we were unable to recover it. 00:39:18.878 [2024-07-22 19:43:37.633211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.633344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.633379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.878 [2024-07-22 19:43:37.633392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.878 [2024-07-22 19:43:37.633401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.878 [2024-07-22 19:43:37.633430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.878 qpair failed and we were unable to recover it. 
00:39:18.878 [2024-07-22 19:43:37.643301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.878 [2024-07-22 19:43:37.643446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.878 [2024-07-22 19:43:37.643481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.643495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.643504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.643536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.653328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.653449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.653484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.653498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.653508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.653536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.663337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.663466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.663501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.663515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.663524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.663554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 
00:39:18.879 [2024-07-22 19:43:37.673340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.673463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.673499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.673512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.673521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.673550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.683383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.683510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.683545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.683559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.683568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.683597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.693410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.693525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.693560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.693574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.693583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.693613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 
00:39:18.879 [2024-07-22 19:43:37.703428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.703563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.703598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.703612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.703621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.703651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.713497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.713624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.713659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.713677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.713686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.713716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.723476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.723600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.723635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.723648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.723658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.723687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 
00:39:18.879 [2024-07-22 19:43:37.733546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.733672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.733707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.733722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.733731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.733760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.743570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.743703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.743738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.743751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.743761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.743789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.753614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.753740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.753777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.753791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.753801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.753839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 
00:39:18.879 [2024-07-22 19:43:37.763653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.763785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.879 [2024-07-22 19:43:37.763820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.879 [2024-07-22 19:43:37.763833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.879 [2024-07-22 19:43:37.763844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.879 [2024-07-22 19:43:37.763874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.879 qpair failed and we were unable to recover it. 00:39:18.879 [2024-07-22 19:43:37.773703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.879 [2024-07-22 19:43:37.773831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.880 [2024-07-22 19:43:37.773867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.880 [2024-07-22 19:43:37.773881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.880 [2024-07-22 19:43:37.773891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.880 [2024-07-22 19:43:37.773921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.880 qpair failed and we were unable to recover it. 00:39:18.880 [2024-07-22 19:43:37.783700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.880 [2024-07-22 19:43:37.783859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.880 [2024-07-22 19:43:37.783908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.880 [2024-07-22 19:43:37.783925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.880 [2024-07-22 19:43:37.783936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.880 [2024-07-22 19:43:37.783973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.880 qpair failed and we were unable to recover it. 
00:39:18.880 [2024-07-22 19:43:37.793709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.880 [2024-07-22 19:43:37.793850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.880 [2024-07-22 19:43:37.793889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.880 [2024-07-22 19:43:37.793903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.880 [2024-07-22 19:43:37.793912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.880 [2024-07-22 19:43:37.793944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.880 qpair failed and we were unable to recover it. 00:39:18.880 [2024-07-22 19:43:37.803781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.880 [2024-07-22 19:43:37.803920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.880 [2024-07-22 19:43:37.803973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.880 [2024-07-22 19:43:37.803989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.880 [2024-07-22 19:43:37.804000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.880 [2024-07-22 19:43:37.804037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.880 qpair failed and we were unable to recover it. 00:39:18.880 [2024-07-22 19:43:37.813819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.880 [2024-07-22 19:43:37.813944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.880 [2024-07-22 19:43:37.813983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.880 [2024-07-22 19:43:37.813997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.880 [2024-07-22 19:43:37.814007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.880 [2024-07-22 19:43:37.814038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.880 qpair failed and we were unable to recover it. 
00:39:18.880 [2024-07-22 19:43:37.823861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:18.880 [2024-07-22 19:43:37.824063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:18.880 [2024-07-22 19:43:37.824099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:18.880 [2024-07-22 19:43:37.824111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:18.880 [2024-07-22 19:43:37.824122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:18.880 [2024-07-22 19:43:37.824152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:18.880 qpair failed and we were unable to recover it. 00:39:19.143 [2024-07-22 19:43:37.833862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.833996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.834032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.834045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.834055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.834084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 00:39:19.143 [2024-07-22 19:43:37.843826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.843953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.843988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.844002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.844011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.844046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 
00:39:19.143 [2024-07-22 19:43:37.853893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.854010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.854046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.854059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.854069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.854098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 00:39:19.143 [2024-07-22 19:43:37.864007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.864126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.864163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.864176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.864186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.864223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 00:39:19.143 [2024-07-22 19:43:37.873990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.874116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.874151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.874164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.874173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.874212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 
00:39:19.143 [2024-07-22 19:43:37.884002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.884134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.884169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.884181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.884190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.884229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 00:39:19.143 [2024-07-22 19:43:37.894027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.894165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.894215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.894229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.894238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.894271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 00:39:19.143 [2024-07-22 19:43:37.904102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.904233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.904269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.904282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.143 [2024-07-22 19:43:37.904291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.143 [2024-07-22 19:43:37.904320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.143 qpair failed and we were unable to recover it. 
00:39:19.143 [2024-07-22 19:43:37.914145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.143 [2024-07-22 19:43:37.914307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.143 [2024-07-22 19:43:37.914342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.143 [2024-07-22 19:43:37.914356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.914365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.914393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:37.924123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.924257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.924292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.924306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.924315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.924344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:37.934213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.934340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.934375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.934388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.934402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.934433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 
00:39:19.144 [2024-07-22 19:43:37.944194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.944331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.944367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.944380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.944390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.944419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:37.954401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.954527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.954562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.954576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.954585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.954615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:37.964265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.964387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.964422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.964436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.964445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.964474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 
00:39:19.144 [2024-07-22 19:43:37.974293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.974412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.974448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.974461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.974470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.974500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:37.984338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.984548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.984583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.984596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.984606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.984635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:37.994315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:37.994443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:37.994478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:37.994492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:37.994503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:37.994531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 
00:39:19.144 [2024-07-22 19:43:38.004388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:38.004516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:38.004550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:38.004564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:38.004573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:38.004601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:38.014414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:38.014539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:38.014575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:38.014597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:38.014607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:38.014635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:38.024460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:38.024581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:38.024616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:38.024630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:38.024645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:38.024674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 
00:39:19.144 [2024-07-22 19:43:38.034488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:38.034608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:38.034644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:38.034658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.144 [2024-07-22 19:43:38.034667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.144 [2024-07-22 19:43:38.034696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.144 qpair failed and we were unable to recover it. 00:39:19.144 [2024-07-22 19:43:38.044522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.144 [2024-07-22 19:43:38.044649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.144 [2024-07-22 19:43:38.044685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.144 [2024-07-22 19:43:38.044698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.145 [2024-07-22 19:43:38.044707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.145 [2024-07-22 19:43:38.044736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.145 qpair failed and we were unable to recover it. 00:39:19.145 [2024-07-22 19:43:38.054547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.145 [2024-07-22 19:43:38.054689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.145 [2024-07-22 19:43:38.054725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.145 [2024-07-22 19:43:38.054739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.145 [2024-07-22 19:43:38.054749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.145 [2024-07-22 19:43:38.054780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.145 qpair failed and we were unable to recover it. 
00:39:19.145 [2024-07-22 19:43:38.064590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.145 [2024-07-22 19:43:38.064715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.145 [2024-07-22 19:43:38.064750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.145 [2024-07-22 19:43:38.064763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.145 [2024-07-22 19:43:38.064772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.145 [2024-07-22 19:43:38.064802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.145 qpair failed and we were unable to recover it. 00:39:19.145 [2024-07-22 19:43:38.074607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.145 [2024-07-22 19:43:38.074761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.145 [2024-07-22 19:43:38.074798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.145 [2024-07-22 19:43:38.074811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.145 [2024-07-22 19:43:38.074820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.145 [2024-07-22 19:43:38.074850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.145 qpair failed and we were unable to recover it. 00:39:19.145 [2024-07-22 19:43:38.084638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.145 [2024-07-22 19:43:38.084774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.145 [2024-07-22 19:43:38.084820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.145 [2024-07-22 19:43:38.084836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.145 [2024-07-22 19:43:38.084847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.145 [2024-07-22 19:43:38.084882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.145 qpair failed and we were unable to recover it. 
00:39:19.408 [2024-07-22 19:43:38.094698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.094819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.094859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.094873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.094883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.094916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.104825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.104958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.104994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.105008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.105018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.105048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.114726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.114863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.114909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.114933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.114944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.114980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 
00:39:19.408 [2024-07-22 19:43:38.124758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.124898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.124944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.124960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.124971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.125006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.134810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.134954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.134993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.135007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.135017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.135051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.144822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.144951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.144987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.145000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.145010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.145040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 
00:39:19.408 [2024-07-22 19:43:38.155040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.155164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.155210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.155225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.155234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.155265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.164909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.165044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.165079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.165092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.165102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.165132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.174924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.175048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.175084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.175098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.175110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.175141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 
00:39:19.408 [2024-07-22 19:43:38.184952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.185072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.185108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.185122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.185131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.185160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.195011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.408 [2024-07-22 19:43:38.195134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.408 [2024-07-22 19:43:38.195169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.408 [2024-07-22 19:43:38.195183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.408 [2024-07-22 19:43:38.195193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.408 [2024-07-22 19:43:38.195234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.408 qpair failed and we were unable to recover it. 00:39:19.408 [2024-07-22 19:43:38.205040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.205172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.205218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.205233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.205242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.205272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 
00:39:19.409 [2024-07-22 19:43:38.215052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.215168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.215213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.215227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.215236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.215266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.225101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.225260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.225297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.225315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.225325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.225355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.235120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.235249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.235285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.235298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.235308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.235338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 
00:39:19.409 [2024-07-22 19:43:38.245162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.245290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.245325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.245338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.245348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.245382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.255237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.255355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.255390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.255403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.255412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.255443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.265229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.265353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.265389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.265404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.265413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.265442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 
00:39:19.409 [2024-07-22 19:43:38.275283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.275447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.275482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.275494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.275504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.275533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.285234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.285381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.285418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.285431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.285441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.285473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.295311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.295428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.295468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.295481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.295491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.295521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 
00:39:19.409 [2024-07-22 19:43:38.305299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.305422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.305458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.305473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.305484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.305514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.315357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.315486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.315522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.315535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.409 [2024-07-22 19:43:38.315544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.409 [2024-07-22 19:43:38.315573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.409 qpair failed and we were unable to recover it. 00:39:19.409 [2024-07-22 19:43:38.325352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.409 [2024-07-22 19:43:38.325474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.409 [2024-07-22 19:43:38.325508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.409 [2024-07-22 19:43:38.325522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.410 [2024-07-22 19:43:38.325531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.410 [2024-07-22 19:43:38.325560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.410 qpair failed and we were unable to recover it. 
00:39:19.410 [2024-07-22 19:43:38.335442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.410 [2024-07-22 19:43:38.335574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.410 [2024-07-22 19:43:38.335609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.410 [2024-07-22 19:43:38.335622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.410 [2024-07-22 19:43:38.335632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.410 [2024-07-22 19:43:38.335665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.410 qpair failed and we were unable to recover it. 00:39:19.410 [2024-07-22 19:43:38.345467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.410 [2024-07-22 19:43:38.345608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.410 [2024-07-22 19:43:38.345642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.410 [2024-07-22 19:43:38.345655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.410 [2024-07-22 19:43:38.345664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.410 [2024-07-22 19:43:38.345693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.410 qpair failed and we were unable to recover it. 00:39:19.410 [2024-07-22 19:43:38.355490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.410 [2024-07-22 19:43:38.355662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.410 [2024-07-22 19:43:38.355702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.410 [2024-07-22 19:43:38.355715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.410 [2024-07-22 19:43:38.355725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.410 [2024-07-22 19:43:38.355755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.410 qpair failed and we were unable to recover it. 
00:39:19.673 [2024-07-22 19:43:38.365573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.365711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.365747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.365760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.365769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.365799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.375558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.375694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.375729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.375743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.375752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.375782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.385573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.385709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.385745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.385758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.385768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.385797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 
00:39:19.673 [2024-07-22 19:43:38.395571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.395693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.395727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.395741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.395751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.395780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.405643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.405775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.405810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.405823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.405833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.405861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.415710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.415833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.415868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.415882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.415891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.415921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 
00:39:19.673 [2024-07-22 19:43:38.425690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.425834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.425881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.425898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.425915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.425952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.435746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.435886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.435933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.435950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.435961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.435997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.445762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.445895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.445942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.445959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.445970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.446005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 
00:39:19.673 [2024-07-22 19:43:38.455769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.455895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.455933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.455947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.455957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.455989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.465819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.465941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.465976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.465989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.465999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.466028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 00:39:19.673 [2024-07-22 19:43:38.475794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.673 [2024-07-22 19:43:38.475918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.673 [2024-07-22 19:43:38.475955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.673 [2024-07-22 19:43:38.475969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.673 [2024-07-22 19:43:38.475978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.673 [2024-07-22 19:43:38.476008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.673 qpair failed and we were unable to recover it. 
00:39:19.673 [2024-07-22 19:43:38.485841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.485960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.485995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.486009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.486019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.486048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.495950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.496080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.496116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.496129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.496139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.496168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.505899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.506012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.506047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.506062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.506072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.506100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 
00:39:19.674 [2024-07-22 19:43:38.515956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.516080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.516116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.516135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.516144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.516173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.525940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.526059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.526095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.526109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.526129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.526159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.536003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.536147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.536182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.536195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.536213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.536243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 
00:39:19.674 [2024-07-22 19:43:38.546041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.546161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.546197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.546218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.546227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.546257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.556039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.556164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.556208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.556222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.556233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.556263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.566119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.566250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.566286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.566299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.566309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.566339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 
00:39:19.674 [2024-07-22 19:43:38.576145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.576287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.576322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.576335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.576345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.576380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.586176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.586301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.586337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.586351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.586362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.586390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.596194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.596331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.596367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.596381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.596390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.596420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 
00:39:19.674 [2024-07-22 19:43:38.606122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.606245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.606280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.674 [2024-07-22 19:43:38.606300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.674 [2024-07-22 19:43:38.606310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.674 [2024-07-22 19:43:38.606340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.674 qpair failed and we were unable to recover it. 00:39:19.674 [2024-07-22 19:43:38.616252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.674 [2024-07-22 19:43:38.616378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.674 [2024-07-22 19:43:38.616415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.675 [2024-07-22 19:43:38.616428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.675 [2024-07-22 19:43:38.616438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.675 [2024-07-22 19:43:38.616468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.675 qpair failed and we were unable to recover it. 00:39:19.937 [2024-07-22 19:43:38.626269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.937 [2024-07-22 19:43:38.626392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.937 [2024-07-22 19:43:38.626429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.937 [2024-07-22 19:43:38.626442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.937 [2024-07-22 19:43:38.626452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.937 [2024-07-22 19:43:38.626482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.937 qpair failed and we were unable to recover it. 
00:39:19.937 [2024-07-22 19:43:38.636355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.937 [2024-07-22 19:43:38.636498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.937 [2024-07-22 19:43:38.636533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.937 [2024-07-22 19:43:38.636547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.937 [2024-07-22 19:43:38.636558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.937 [2024-07-22 19:43:38.636586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.937 qpair failed and we were unable to recover it. 00:39:19.937 [2024-07-22 19:43:38.646421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.937 [2024-07-22 19:43:38.646554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.937 [2024-07-22 19:43:38.646590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.937 [2024-07-22 19:43:38.646603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.937 [2024-07-22 19:43:38.646612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.937 [2024-07-22 19:43:38.646644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.937 qpair failed and we were unable to recover it. 00:39:19.937 [2024-07-22 19:43:38.656369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.937 [2024-07-22 19:43:38.656507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.937 [2024-07-22 19:43:38.656542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.937 [2024-07-22 19:43:38.656555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.937 [2024-07-22 19:43:38.656564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.937 [2024-07-22 19:43:38.656595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 
00:39:19.938 [2024-07-22 19:43:38.666383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.666498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.666534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.666547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.666557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.666586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.676403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.676532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.676567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.676579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.676589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.676618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.686475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.686603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.686639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.686652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.686661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.686690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 
00:39:19.938 [2024-07-22 19:43:38.696570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.696695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.696735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.696748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.696758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.696788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.706523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.706651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.706685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.706698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.706708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.706736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.716509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.716632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.716667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.716680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.716690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.716718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 
00:39:19.938 [2024-07-22 19:43:38.726596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.726720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.726755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.726768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.726778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.726809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.736618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.736738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.736772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.736785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.736795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.736829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.746674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.746793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.746828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.746842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.746851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.746879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 
00:39:19.938 [2024-07-22 19:43:38.756677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.756803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.756838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.756852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.756861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.756889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.766672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.766791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.766828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.766841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.766851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.766881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 00:39:19.938 [2024-07-22 19:43:38.776711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.776832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.776868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.776882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.938 [2024-07-22 19:43:38.776891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.938 [2024-07-22 19:43:38.776923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.938 qpair failed and we were unable to recover it. 
00:39:19.938 [2024-07-22 19:43:38.786785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.938 [2024-07-22 19:43:38.786918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.938 [2024-07-22 19:43:38.786979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.938 [2024-07-22 19:43:38.786995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.787006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.787042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:19.939 [2024-07-22 19:43:38.796783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.796904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.796939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.796953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.796962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.796993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:19.939 [2024-07-22 19:43:38.806576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.806699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.806733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.806746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.806756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.806787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 
00:39:19.939 [2024-07-22 19:43:38.816823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.816935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.816966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.816979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.816989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.817015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:19.939 [2024-07-22 19:43:38.826830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.826937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.826968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.826980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.826994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.827021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:19.939 [2024-07-22 19:43:38.836936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.837074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.837104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.837116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.837125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.837151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 
00:39:19.939 [2024-07-22 19:43:38.846660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.846759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.846788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.846801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.846810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.846835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:19.939 [2024-07-22 19:43:38.856922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.857032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.857061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.857073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.857085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.857109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:19.939 [2024-07-22 19:43:38.866980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.867083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.867111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.867123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.867133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.867158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 
00:39:19.939 [2024-07-22 19:43:38.876923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.877026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.877053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.877066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.877076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.877100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:19.939 [2024-07-22 19:43:38.886763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:19.939 [2024-07-22 19:43:38.886859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:19.939 [2024-07-22 19:43:38.886883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:19.939 [2024-07-22 19:43:38.886895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:19.939 [2024-07-22 19:43:38.886904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:19.939 [2024-07-22 19:43:38.886927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:19.939 qpair failed and we were unable to recover it. 00:39:20.203 [2024-07-22 19:43:38.896967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.897072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.897096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.897108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.897117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.897140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 
00:39:20.203 [2024-07-22 19:43:38.907007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.907129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.907153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.907165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.907174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.907196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 00:39:20.203 [2024-07-22 19:43:38.917053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.917149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.917173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.917188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.917197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.917229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 00:39:20.203 [2024-07-22 19:43:38.926895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.926984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.927008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.927020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.927029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.927051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 
00:39:20.203 [2024-07-22 19:43:38.937038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.937149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.937173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.937185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.937194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.937221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 00:39:20.203 [2024-07-22 19:43:38.947123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.947224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.947248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.947259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.947268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.947290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 00:39:20.203 [2024-07-22 19:43:38.957106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.957248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.957271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.957282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.957291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.957315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 
00:39:20.203 [2024-07-22 19:43:38.966980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.203 [2024-07-22 19:43:38.967065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.203 [2024-07-22 19:43:38.967086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.203 [2024-07-22 19:43:38.967098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.203 [2024-07-22 19:43:38.967107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.203 [2024-07-22 19:43:38.967129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.203 qpair failed and we were unable to recover it. 00:39:20.203 [2024-07-22 19:43:38.977174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:38.977274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:38.977297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:38.977308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:38.977317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:38.977339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:38.987318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:38.987412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:38.987435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:38.987446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:38.987454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:38.987476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 
00:39:20.204 [2024-07-22 19:43:38.997226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:38.997319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:38.997341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:38.997352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:38.997361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:38.997383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.007112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.007218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.007241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.007256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.007265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.007287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.017392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.017486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.017508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.017519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.017529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.017550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 
00:39:20.204 [2024-07-22 19:43:39.027350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.027438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.027459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.027471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.027480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.027501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.037359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.037448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.037469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.037481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.037490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.037517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.047173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.047263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.047285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.047297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.047305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.047326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 
00:39:20.204 [2024-07-22 19:43:39.057436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.057525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.057547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.057558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.057567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.057588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.067464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.067563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.067584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.067595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.067604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.067625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.077403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.077494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.077516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.077527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.077536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.077558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 
00:39:20.204 [2024-07-22 19:43:39.087288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.087374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.087395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.087407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.087416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.087437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.097538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.204 [2024-07-22 19:43:39.097630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.204 [2024-07-22 19:43:39.097655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.204 [2024-07-22 19:43:39.097667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.204 [2024-07-22 19:43:39.097676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.204 [2024-07-22 19:43:39.097697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.204 qpair failed and we were unable to recover it. 00:39:20.204 [2024-07-22 19:43:39.107539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.205 [2024-07-22 19:43:39.107635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.205 [2024-07-22 19:43:39.107656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.205 [2024-07-22 19:43:39.107668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.205 [2024-07-22 19:43:39.107677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.205 [2024-07-22 19:43:39.107697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.205 qpair failed and we were unable to recover it. 
00:39:20.205 [2024-07-22 19:43:39.117543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.205 [2024-07-22 19:43:39.117644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.205 [2024-07-22 19:43:39.117666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.205 [2024-07-22 19:43:39.117677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.205 [2024-07-22 19:43:39.117685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.205 [2024-07-22 19:43:39.117706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.205 qpair failed and we were unable to recover it. 00:39:20.205 [2024-07-22 19:43:39.127402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.205 [2024-07-22 19:43:39.127491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.205 [2024-07-22 19:43:39.127512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.205 [2024-07-22 19:43:39.127524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.205 [2024-07-22 19:43:39.127532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.205 [2024-07-22 19:43:39.127553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.205 qpair failed and we were unable to recover it. 00:39:20.205 [2024-07-22 19:43:39.137469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.205 [2024-07-22 19:43:39.137622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.205 [2024-07-22 19:43:39.137643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.205 [2024-07-22 19:43:39.137654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.205 [2024-07-22 19:43:39.137662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.205 [2024-07-22 19:43:39.137686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.205 qpair failed and we were unable to recover it. 
00:39:20.205 [2024-07-22 19:43:39.147743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.205 [2024-07-22 19:43:39.147834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.205 [2024-07-22 19:43:39.147856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.205 [2024-07-22 19:43:39.147867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.205 [2024-07-22 19:43:39.147876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.205 [2024-07-22 19:43:39.147897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.205 qpair failed and we were unable to recover it. 00:39:20.467 [2024-07-22 19:43:39.157710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.467 [2024-07-22 19:43:39.157806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.467 [2024-07-22 19:43:39.157828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.467 [2024-07-22 19:43:39.157839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.467 [2024-07-22 19:43:39.157848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.467 [2024-07-22 19:43:39.157869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.467 qpair failed and we were unable to recover it. 00:39:20.467 [2024-07-22 19:43:39.167464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.467 [2024-07-22 19:43:39.167553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.467 [2024-07-22 19:43:39.167578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.467 [2024-07-22 19:43:39.167589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.467 [2024-07-22 19:43:39.167598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.467 [2024-07-22 19:43:39.167619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.467 qpair failed and we were unable to recover it. 
00:39:20.467 [2024-07-22 19:43:39.177566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.467 [2024-07-22 19:43:39.177652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.467 [2024-07-22 19:43:39.177673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.467 [2024-07-22 19:43:39.177684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.467 [2024-07-22 19:43:39.177693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.467 [2024-07-22 19:43:39.177715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.467 qpair failed and we were unable to recover it. 00:39:20.467 [2024-07-22 19:43:39.187595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.467 [2024-07-22 19:43:39.187680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.467 [2024-07-22 19:43:39.187705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.467 [2024-07-22 19:43:39.187717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.467 [2024-07-22 19:43:39.187726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.467 [2024-07-22 19:43:39.187747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.467 qpair failed and we were unable to recover it. 00:39:20.467 [2024-07-22 19:43:39.197804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.467 [2024-07-22 19:43:39.197903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.467 [2024-07-22 19:43:39.197927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.197938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.197947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.197968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 
00:39:20.468 [2024-07-22 19:43:39.207644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.207764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.207785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.207796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.207805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.207826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.217715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.217796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.217817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.217829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.217837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.217860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.227630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.227713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.227735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.227747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.227759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.227780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 
00:39:20.468 [2024-07-22 19:43:39.237961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.238051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.238072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.238084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.238093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.238114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.247854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.247936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.247957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.247970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.247978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.247999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.257765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.257849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.257870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.257882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.257891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.257911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 
00:39:20.468 [2024-07-22 19:43:39.267814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.267898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.267919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.267931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.267940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.267961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.278026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.278119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.278141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.278152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.278161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.278181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.287854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.287942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.287964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.287976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.287985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.288006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 
00:39:20.468 [2024-07-22 19:43:39.297889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.297971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.297992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.298011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.298019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.298040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.307850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.307933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.307955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.307966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.307975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.307995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 00:39:20.468 [2024-07-22 19:43:39.318147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.318241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.318263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.318273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.468 [2024-07-22 19:43:39.318285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.468 [2024-07-22 19:43:39.318307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.468 qpair failed and we were unable to recover it. 
00:39:20.468 [2024-07-22 19:43:39.327972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.468 [2024-07-22 19:43:39.328063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.468 [2024-07-22 19:43:39.328084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.468 [2024-07-22 19:43:39.328095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.328104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.328124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 00:39:20.469 [2024-07-22 19:43:39.338016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.338115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.338135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.338147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.338155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.338176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 00:39:20.469 [2024-07-22 19:43:39.348023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.348108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.348129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.348140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.348148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.348168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 
00:39:20.469 [2024-07-22 19:43:39.358249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.358342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.358364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.358375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.358383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.358403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 00:39:20.469 [2024-07-22 19:43:39.368015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.368097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.368118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.368129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.368138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.368159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 00:39:20.469 [2024-07-22 19:43:39.378105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.378187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.378213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.378225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.378233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.378254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 
00:39:20.469 [2024-07-22 19:43:39.388142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.388226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.388247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.388259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.388268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.388289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 00:39:20.469 [2024-07-22 19:43:39.398359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.398452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.398473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.398484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.398492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.398513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 00:39:20.469 [2024-07-22 19:43:39.408257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.408374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.408396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.408410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.408419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.408440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 
00:39:20.469 [2024-07-22 19:43:39.418242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.469 [2024-07-22 19:43:39.418324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.469 [2024-07-22 19:43:39.418345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.469 [2024-07-22 19:43:39.418356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.469 [2024-07-22 19:43:39.418364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.469 [2024-07-22 19:43:39.418386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.469 qpair failed and we were unable to recover it. 00:39:20.731 [2024-07-22 19:43:39.428240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.731 [2024-07-22 19:43:39.428323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.731 [2024-07-22 19:43:39.428344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.731 [2024-07-22 19:43:39.428355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.731 [2024-07-22 19:43:39.428364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.731 [2024-07-22 19:43:39.428384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.731 qpair failed and we were unable to recover it. 00:39:20.731 [2024-07-22 19:43:39.438449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.731 [2024-07-22 19:43:39.438538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.731 [2024-07-22 19:43:39.438559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.731 [2024-07-22 19:43:39.438570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.731 [2024-07-22 19:43:39.438579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.731 [2024-07-22 19:43:39.438599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.731 qpair failed and we were unable to recover it. 
00:39:20.731 [2024-07-22 19:43:39.448369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.731 [2024-07-22 19:43:39.448478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.731 [2024-07-22 19:43:39.448499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.731 [2024-07-22 19:43:39.448510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.731 [2024-07-22 19:43:39.448519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.448540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.458363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.458450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.458471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.458482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.458491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.458512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.468429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.468535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.468560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.468572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.468581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.468602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 
00:39:20.732 [2024-07-22 19:43:39.478592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.478683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.478704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.478716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.478725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.478746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.488417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.488514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.488535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.488546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.488554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.488574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.498462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.498545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.498569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.498580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.498588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.498609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 
00:39:20.732 [2024-07-22 19:43:39.508489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.508568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.508589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.508600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.508609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.508629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.518787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.518933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.518954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.518965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.518973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.518995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.528524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.528644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.528665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.528676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.528684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.528705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 
00:39:20.732 [2024-07-22 19:43:39.538578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.538663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.538684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.538695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.538703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.538728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.548566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.548649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.548670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.548681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.548690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.548710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.558913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.559036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.559057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.559068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.559077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.559100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 
00:39:20.732 [2024-07-22 19:43:39.568646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.568748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.568769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.568781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.568790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.568810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.578741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.732 [2024-07-22 19:43:39.578841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.732 [2024-07-22 19:43:39.578861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.732 [2024-07-22 19:43:39.578873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.732 [2024-07-22 19:43:39.578882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.732 [2024-07-22 19:43:39.578902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.732 qpair failed and we were unable to recover it. 00:39:20.732 [2024-07-22 19:43:39.588688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.588788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.588823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.588837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.588847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.588874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 
00:39:20.733 [2024-07-22 19:43:39.598983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.599095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.599126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.599140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.599150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.599178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 00:39:20.733 [2024-07-22 19:43:39.608716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.608826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.608849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.608862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.608871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.608893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 00:39:20.733 [2024-07-22 19:43:39.618799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.618883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.618905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.618917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.618926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.618948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 
00:39:20.733 [2024-07-22 19:43:39.628826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.628908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.628930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.628941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.628954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.628975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 00:39:20.733 [2024-07-22 19:43:39.639083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.639186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.639212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.639224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.639232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.639253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 00:39:20.733 [2024-07-22 19:43:39.648861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.648947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.648968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.648980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.648988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.649009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 
00:39:20.733 [2024-07-22 19:43:39.658893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.658978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.659000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.659011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.659019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.659040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 00:39:20.733 [2024-07-22 19:43:39.668872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.668953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.668974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.668985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.668993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.669014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 00:39:20.733 [2024-07-22 19:43:39.679135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.733 [2024-07-22 19:43:39.679230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.733 [2024-07-22 19:43:39.679259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.733 [2024-07-22 19:43:39.679271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.733 [2024-07-22 19:43:39.679280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.733 [2024-07-22 19:43:39.679302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.733 qpair failed and we were unable to recover it. 
00:39:20.995 [2024-07-22 19:43:39.688979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.995 [2024-07-22 19:43:39.689073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.995 [2024-07-22 19:43:39.689094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.995 [2024-07-22 19:43:39.689106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.995 [2024-07-22 19:43:39.689115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.995 [2024-07-22 19:43:39.689139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.995 qpair failed and we were unable to recover it. 00:39:20.995 [2024-07-22 19:43:39.699027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.995 [2024-07-22 19:43:39.699111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.995 [2024-07-22 19:43:39.699132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.995 [2024-07-22 19:43:39.699143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.995 [2024-07-22 19:43:39.699152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.995 [2024-07-22 19:43:39.699173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.995 qpair failed and we were unable to recover it. 00:39:20.995 [2024-07-22 19:43:39.709063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.995 [2024-07-22 19:43:39.709157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.995 [2024-07-22 19:43:39.709179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.995 [2024-07-22 19:43:39.709190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.995 [2024-07-22 19:43:39.709206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.995 [2024-07-22 19:43:39.709228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.995 qpair failed and we were unable to recover it. 
00:39:20.995 [2024-07-22 19:43:39.719271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.995 [2024-07-22 19:43:39.719364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.995 [2024-07-22 19:43:39.719385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.995 [2024-07-22 19:43:39.719397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.995 [2024-07-22 19:43:39.719409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.995 [2024-07-22 19:43:39.719431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.995 qpair failed and we were unable to recover it. 00:39:20.995 [2024-07-22 19:43:39.728996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.995 [2024-07-22 19:43:39.729083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.995 [2024-07-22 19:43:39.729105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.995 [2024-07-22 19:43:39.729116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.995 [2024-07-22 19:43:39.729125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.995 [2024-07-22 19:43:39.729146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.995 qpair failed and we were unable to recover it. 00:39:20.995 [2024-07-22 19:43:39.739241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:20.995 [2024-07-22 19:43:39.739326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:20.995 [2024-07-22 19:43:39.739348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:20.995 [2024-07-22 19:43:39.739359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:20.995 [2024-07-22 19:43:39.739368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:20.995 [2024-07-22 19:43:39.739390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:20.995 qpair failed and we were unable to recover it. 
00:39:20.995 [2024-07-22 19:43:39.749168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:20.995 [2024-07-22 19:43:39.749249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:20.995 [2024-07-22 19:43:39.749271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:20.995 [2024-07-22 19:43:39.749282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:20.995 [2024-07-22 19:43:39.749291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00
00:39:20.995 [2024-07-22 19:43:39.749312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:39:20.995 qpair failed and we were unable to recover it.
[The same sequence of six *ERROR* records followed by "qpair failed and we were unable to recover it." repeats for every subsequent connection attempt at roughly 10 ms intervals, from [2024-07-22 19:43:39.759] through [2024-07-22 19:43:40.431], always reporting Unknown controller ID 0x1, sct 1, sc 130, tqpair=0x6150003aff00, and qpair id 4; the elapsed-time markers advance from 00:39:20.995 to 00:39:21.526 over this span.]
00:39:21.526 [2024-07-22 19:43:40.441303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.526 [2024-07-22 19:43:40.441410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.526 [2024-07-22 19:43:40.441432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.526 [2024-07-22 19:43:40.441444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.526 [2024-07-22 19:43:40.441452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.526 [2024-07-22 19:43:40.441473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.526 qpair failed and we were unable to recover it. 00:39:21.526 [2024-07-22 19:43:40.451194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.526 [2024-07-22 19:43:40.451286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.526 [2024-07-22 19:43:40.451307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.526 [2024-07-22 19:43:40.451319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.526 [2024-07-22 19:43:40.451328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.526 [2024-07-22 19:43:40.451349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.526 qpair failed and we were unable to recover it. 00:39:21.526 [2024-07-22 19:43:40.461118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.526 [2024-07-22 19:43:40.461212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.526 [2024-07-22 19:43:40.461233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.526 [2024-07-22 19:43:40.461244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.526 [2024-07-22 19:43:40.461253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.527 [2024-07-22 19:43:40.461275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.527 qpair failed and we were unable to recover it. 
00:39:21.527 [2024-07-22 19:43:40.471179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.527 [2024-07-22 19:43:40.471273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.527 [2024-07-22 19:43:40.471295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.527 [2024-07-22 19:43:40.471306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.527 [2024-07-22 19:43:40.471316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.527 [2024-07-22 19:43:40.471337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.527 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.481371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.481463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.481487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.481498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.481507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.481528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.491199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.491287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.491308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.491320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.491330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.491350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 
00:39:21.789 [2024-07-22 19:43:40.501248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.501372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.501394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.501405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.501413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.501435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.511250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.511354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.511375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.511386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.511395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.511416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.521522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.521614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.521635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.521647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.521658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.521680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 
00:39:21.789 [2024-07-22 19:43:40.531313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.531399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.531420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.531432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.531441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.531462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.541355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.541437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.541458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.541469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.541478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.541499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.551445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.551553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.551574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.551586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.551594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.551615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 
00:39:21.789 [2024-07-22 19:43:40.561513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.561602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.561623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.561635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.561644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.561666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.571502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.571621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.571642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.571653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.571662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.571683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.581473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.581554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.581578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.581595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.581604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.581625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 
00:39:21.789 [2024-07-22 19:43:40.591481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.591566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.591587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.591599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.591607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.789 [2024-07-22 19:43:40.591628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.789 qpair failed and we were unable to recover it. 00:39:21.789 [2024-07-22 19:43:40.601703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.789 [2024-07-22 19:43:40.601797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.789 [2024-07-22 19:43:40.601819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.789 [2024-07-22 19:43:40.601830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.789 [2024-07-22 19:43:40.601838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.601859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.611574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.611662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.611683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.611699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.611708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.611729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 
00:39:21.790 [2024-07-22 19:43:40.621564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.621648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.621669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.621681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.621690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.621711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.631640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.631723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.631744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.631756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.631765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.631786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.641825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.641917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.641939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.641950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.641959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.641980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 
00:39:21.790 [2024-07-22 19:43:40.651661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.651754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.651785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.651800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.651810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.651836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.661646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.661749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.661773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.661785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.661794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.661817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.671746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.671842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.671864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.671876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.671885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.671906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 
00:39:21.790 [2024-07-22 19:43:40.681946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.682049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.682080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.682094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.682104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.682131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.691747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.691845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.691876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.691891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.691901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.691929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.701890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.701983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.702015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.702033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.702043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.702071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 
00:39:21.790 [2024-07-22 19:43:40.711917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.712004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.712027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.712039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.712049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.712072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.722042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.722136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.722160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.722171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.722180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.790 [2024-07-22 19:43:40.722206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.790 qpair failed and we were unable to recover it. 00:39:21.790 [2024-07-22 19:43:40.731877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:21.790 [2024-07-22 19:43:40.731960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:21.790 [2024-07-22 19:43:40.731981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:21.790 [2024-07-22 19:43:40.731994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:21.790 [2024-07-22 19:43:40.732003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:21.791 [2024-07-22 19:43:40.732024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:21.791 qpair failed and we were unable to recover it. 
00:39:22.053 [2024-07-22 19:43:40.741960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.742044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.742065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.742078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.742086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.742108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.752007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.752122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.752144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.752155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.752164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.752185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.762154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.762305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.762327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.762338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.762347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.762368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 
00:39:22.053 [2024-07-22 19:43:40.771969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.772056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.772078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.772090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.772098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.772119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.782051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.782156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.782178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.782189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.782197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.782225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.792028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.792113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.792137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.792150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.792159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.792180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 
00:39:22.053 [2024-07-22 19:43:40.802284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.802386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.802407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.802418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.802428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.802452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.812093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.812179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.812205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.812217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.812226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.812247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.822142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.822225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.822246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.822258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.822267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.822289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 
00:39:22.053 [2024-07-22 19:43:40.832216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.832295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.832317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.832328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.832337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.832361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.842399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.842488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.842509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.842521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.842530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.842551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 00:39:22.053 [2024-07-22 19:43:40.852224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.852309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.053 [2024-07-22 19:43:40.852330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.053 [2024-07-22 19:43:40.852342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.053 [2024-07-22 19:43:40.852351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.053 [2024-07-22 19:43:40.852372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.053 qpair failed and we were unable to recover it. 
00:39:22.053 [2024-07-22 19:43:40.862243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.053 [2024-07-22 19:43:40.862423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.862445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.862457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.862466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.862488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.872267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.872350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.872372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.872384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.872393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.872413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.882461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.882550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.882575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.882587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.882596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.882617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 
00:39:22.054 [2024-07-22 19:43:40.892293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.892378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.892398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.892410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.892418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.892439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.902361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.902471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.902492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.902503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.902512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.902533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.912419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.912502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.912523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.912535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.912544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.912564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 
00:39:22.054 [2024-07-22 19:43:40.922626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.922739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.922761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.922772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.922784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.922805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.932442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.932530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.932551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.932563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.932571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.932592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.942477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.942562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.942584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.942596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.942605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.942626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 
00:39:22.054 [2024-07-22 19:43:40.952503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.952589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.952611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.952622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.952631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.952652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.962762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.962852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.962873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.962885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.962894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.962915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.972562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.972683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.972705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.972716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.972724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.972745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 
00:39:22.054 [2024-07-22 19:43:40.982632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.982757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.982778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.982789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.054 [2024-07-22 19:43:40.982798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.054 [2024-07-22 19:43:40.982819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.054 qpair failed and we were unable to recover it. 00:39:22.054 [2024-07-22 19:43:40.992532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.054 [2024-07-22 19:43:40.992632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.054 [2024-07-22 19:43:40.992653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.054 [2024-07-22 19:43:40.992665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.055 [2024-07-22 19:43:40.992674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.055 [2024-07-22 19:43:40.992695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.055 qpair failed and we were unable to recover it. 00:39:22.055 [2024-07-22 19:43:41.002860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.055 [2024-07-22 19:43:41.002952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.055 [2024-07-22 19:43:41.002973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.055 [2024-07-22 19:43:41.002985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.055 [2024-07-22 19:43:41.002994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.055 [2024-07-22 19:43:41.003015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.055 qpair failed and we were unable to recover it. 
00:39:22.317 [2024-07-22 19:43:41.012750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.012835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.012856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.012869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.012884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.012905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 00:39:22.317 [2024-07-22 19:43:41.022725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.022808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.022829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.022841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.022851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.022873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 00:39:22.317 [2024-07-22 19:43:41.032716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.032801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.032822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.032834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.032844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.032864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 
00:39:22.317 [2024-07-22 19:43:41.042952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.043039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.043060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.043072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.043080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.043101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 00:39:22.317 [2024-07-22 19:43:41.052772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.052878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.052900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.052911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.052920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.052941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 00:39:22.317 [2024-07-22 19:43:41.062836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.062964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.062989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.063000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.063009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.063031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 
00:39:22.317 [2024-07-22 19:43:41.072815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.072895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.072916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.072927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.072936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.072958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 00:39:22.317 [2024-07-22 19:43:41.083038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.083129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.083152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.083164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.083174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.083195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 00:39:22.317 [2024-07-22 19:43:41.092867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.092952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.092973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.092986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.093000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.093021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.317 qpair failed and we were unable to recover it. 
00:39:22.317 [2024-07-22 19:43:41.102913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.317 [2024-07-22 19:43:41.103001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.317 [2024-07-22 19:43:41.103023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.317 [2024-07-22 19:43:41.103038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.317 [2024-07-22 19:43:41.103047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.317 [2024-07-22 19:43:41.103068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.112926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.113012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.113033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.113044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.113054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.113077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.123150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.123243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.123264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.123275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.123283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.123305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 
00:39:22.318 [2024-07-22 19:43:41.132999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.133083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.133106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.133118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.133127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.133148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.143094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.143180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.143209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.143221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.143230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.143252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.153033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.153119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.153140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.153152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.153161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.153181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 
00:39:22.318 [2024-07-22 19:43:41.163275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.163367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.163389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.163400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.163409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.163430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.173145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.173247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.173268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.173279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.173288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.173309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.183157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.183242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.183263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.183275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.183284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.183305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 
00:39:22.318 [2024-07-22 19:43:41.193180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.193269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.193294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.193306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.193314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.193335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.203368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.203455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.203476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.203488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.203496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.203517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.213209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.213292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.213314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.213326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.213335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.213355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 
00:39:22.318 [2024-07-22 19:43:41.223179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.223268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.223289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.223301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.223310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.223332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.233289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.318 [2024-07-22 19:43:41.233398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.318 [2024-07-22 19:43:41.233420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.318 [2024-07-22 19:43:41.233432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.318 [2024-07-22 19:43:41.233440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.318 [2024-07-22 19:43:41.233464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.318 qpair failed and we were unable to recover it. 00:39:22.318 [2024-07-22 19:43:41.243432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.319 [2024-07-22 19:43:41.243519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.319 [2024-07-22 19:43:41.243540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.319 [2024-07-22 19:43:41.243552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.319 [2024-07-22 19:43:41.243560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.319 [2024-07-22 19:43:41.243582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.319 qpair failed and we were unable to recover it. 
00:39:22.319 [2024-07-22 19:43:41.253307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.319 [2024-07-22 19:43:41.253395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.319 [2024-07-22 19:43:41.253416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.319 [2024-07-22 19:43:41.253428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.319 [2024-07-22 19:43:41.253437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.319 [2024-07-22 19:43:41.253457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.319 qpair failed and we were unable to recover it. 00:39:22.319 [2024-07-22 19:43:41.263336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.319 [2024-07-22 19:43:41.263449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.319 [2024-07-22 19:43:41.263471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.319 [2024-07-22 19:43:41.263482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.319 [2024-07-22 19:43:41.263490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.319 [2024-07-22 19:43:41.263511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.319 qpair failed and we were unable to recover it. 00:39:22.581 [2024-07-22 19:43:41.273387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.273468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.273489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.273501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.273510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.273531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 
00:39:22.581 [2024-07-22 19:43:41.283595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.283684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.283709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.283720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.283729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.283749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 00:39:22.581 [2024-07-22 19:43:41.293422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.293510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.293532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.293544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.293552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.293573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 00:39:22.581 [2024-07-22 19:43:41.303479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.303559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.303580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.303592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.303601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.303622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 
00:39:22.581 [2024-07-22 19:43:41.313449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.313531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.313552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.313564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.313572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.313592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 00:39:22.581 [2024-07-22 19:43:41.323728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.323835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.323857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.323868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.323880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.323901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 00:39:22.581 [2024-07-22 19:43:41.333543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.333627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.333649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.333660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.333669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.333689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 
00:39:22.581 [2024-07-22 19:43:41.343619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.343701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.343722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.343734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.343742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.343763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 00:39:22.581 [2024-07-22 19:43:41.353602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.353686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.353713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.353725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.353733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.353755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 00:39:22.581 [2024-07-22 19:43:41.363814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.363903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.363925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.581 [2024-07-22 19:43:41.363937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.581 [2024-07-22 19:43:41.363946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.581 [2024-07-22 19:43:41.363966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.581 qpair failed and we were unable to recover it. 
00:39:22.581 [2024-07-22 19:43:41.373683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.581 [2024-07-22 19:43:41.373828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.581 [2024-07-22 19:43:41.373860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.373873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.373883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.373910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.383676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.383768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.383799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.383813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.383823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.383850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.393711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.393805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.393837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.393851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.393860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.393887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 
00:39:22.582 [2024-07-22 19:43:41.403921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.404038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.404069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.404083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.404093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.404120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.413772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.413858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.413881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.413894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.413909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.413932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.423843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.423922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.423944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.423957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.423966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.423987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 
00:39:22.582 [2024-07-22 19:43:41.433808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.433886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.433907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.433919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.433929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.433950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.444038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.444170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.444191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.444210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.444220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.444242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.453851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.453934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.453955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.453967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.453975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.453996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 
00:39:22.582 [2024-07-22 19:43:41.463840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.463921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.463943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.463955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.463964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.463985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.473943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.474027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.474049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.474061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.474069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.474090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.484247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.484340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.484362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.484373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.484382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.484404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 
00:39:22.582 [2024-07-22 19:43:41.493984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.494067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.494088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.494100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.494108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.582 [2024-07-22 19:43:41.494129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.582 qpair failed and we were unable to recover it. 00:39:22.582 [2024-07-22 19:43:41.504009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.582 [2024-07-22 19:43:41.504096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.582 [2024-07-22 19:43:41.504118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.582 [2024-07-22 19:43:41.504132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.582 [2024-07-22 19:43:41.504141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.583 [2024-07-22 19:43:41.504163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.583 qpair failed and we were unable to recover it. 00:39:22.583 [2024-07-22 19:43:41.514045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.583 [2024-07-22 19:43:41.514127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.583 [2024-07-22 19:43:41.514148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.583 [2024-07-22 19:43:41.514160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.583 [2024-07-22 19:43:41.514169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.583 [2024-07-22 19:43:41.514190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.583 qpair failed and we were unable to recover it. 
00:39:22.583 [2024-07-22 19:43:41.524266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.583 [2024-07-22 19:43:41.524366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.583 [2024-07-22 19:43:41.524392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.583 [2024-07-22 19:43:41.524404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.583 [2024-07-22 19:43:41.524412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.583 [2024-07-22 19:43:41.524434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.583 qpair failed and we were unable to recover it. 00:39:22.845 [2024-07-22 19:43:41.534097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.845 [2024-07-22 19:43:41.534182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.845 [2024-07-22 19:43:41.534208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.845 [2024-07-22 19:43:41.534220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.845 [2024-07-22 19:43:41.534230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.845 [2024-07-22 19:43:41.534252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.845 qpair failed and we were unable to recover it. 00:39:22.845 [2024-07-22 19:43:41.544130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.845 [2024-07-22 19:43:41.544214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.845 [2024-07-22 19:43:41.544236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.845 [2024-07-22 19:43:41.544248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.845 [2024-07-22 19:43:41.544256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.845 [2024-07-22 19:43:41.544277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.845 qpair failed and we were unable to recover it. 
00:39:22.845 [2024-07-22 19:43:41.554158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.845 [2024-07-22 19:43:41.554233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.845 [2024-07-22 19:43:41.554255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.845 [2024-07-22 19:43:41.554267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.845 [2024-07-22 19:43:41.554276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.845 [2024-07-22 19:43:41.554297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.845 qpair failed and we were unable to recover it. 00:39:22.845 [2024-07-22 19:43:41.564381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.845 [2024-07-22 19:43:41.564469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.845 [2024-07-22 19:43:41.564489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.845 [2024-07-22 19:43:41.564501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.845 [2024-07-22 19:43:41.564510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.845 [2024-07-22 19:43:41.564530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.845 qpair failed and we were unable to recover it. 00:39:22.845 [2024-07-22 19:43:41.574207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.845 [2024-07-22 19:43:41.574295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.845 [2024-07-22 19:43:41.574316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.845 [2024-07-22 19:43:41.574327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.845 [2024-07-22 19:43:41.574336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.845 [2024-07-22 19:43:41.574357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.845 qpair failed and we were unable to recover it. 
00:39:22.846 [2024-07-22 19:43:41.584224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.584308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.584333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.584344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.584353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.584375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.594250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.594336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.594361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.594372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.594380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.594401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.604510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.604598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.604619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.604631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.604640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.604667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 
00:39:22.846 [2024-07-22 19:43:41.614298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.614384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.614405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.614417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.614426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.614448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.624348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.624462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.624487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.624499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.624508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.624529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.634348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.634443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.634464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.634475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.634484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.634508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 
00:39:22.846 [2024-07-22 19:43:41.644583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.644690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.644711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.644722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.644731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.644751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.654447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.654564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.654586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.654597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.654606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.654627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.664484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.664581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.664603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.664615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.664623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.664644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 
00:39:22.846 [2024-07-22 19:43:41.674509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.674614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.674635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.674646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.674655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.674676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.684664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.684752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.684776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.684788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.684796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.684817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.694462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.694546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.694583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.694594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.694602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.694625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 
00:39:22.846 [2024-07-22 19:43:41.704539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.846 [2024-07-22 19:43:41.704622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.846 [2024-07-22 19:43:41.704643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.846 [2024-07-22 19:43:41.704655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.846 [2024-07-22 19:43:41.704664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.846 [2024-07-22 19:43:41.704685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.846 qpair failed and we were unable to recover it. 00:39:22.846 [2024-07-22 19:43:41.714584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.714664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.714685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.714695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.714705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.714726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 00:39:22.847 [2024-07-22 19:43:41.724803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.724893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.724914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.724925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.724935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.724958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 
00:39:22.847 [2024-07-22 19:43:41.734752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.734876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.734898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.734909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.734917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.734938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 00:39:22.847 [2024-07-22 19:43:41.744672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.744758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.744779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.744791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.744800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.744821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 00:39:22.847 [2024-07-22 19:43:41.754668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.754762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.754783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.754794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.754803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.754823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 
00:39:22.847 [2024-07-22 19:43:41.764894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.764986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.765008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.765019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.765028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.765049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 00:39:22.847 [2024-07-22 19:43:41.774739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.774829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.774850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.774862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.774871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.774892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 00:39:22.847 [2024-07-22 19:43:41.784802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.784900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.784921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.784932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.784941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.784962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 
00:39:22.847 [2024-07-22 19:43:41.794805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:22.847 [2024-07-22 19:43:41.794905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:22.847 [2024-07-22 19:43:41.794926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:22.847 [2024-07-22 19:43:41.794937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:22.847 [2024-07-22 19:43:41.794945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:22.847 [2024-07-22 19:43:41.794967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:22.847 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.805014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.805103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.805125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.805139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.805149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.805170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.814859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.814991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.815014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.815025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.815037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.815059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 
00:39:23.110 [2024-07-22 19:43:41.824879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.825020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.825042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.825053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.825062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.825082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.834874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.834988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.835009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.835020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.835030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.835051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.845105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.845209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.845231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.845242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.845251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.845273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 
00:39:23.110 [2024-07-22 19:43:41.854955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.855041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.855062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.855073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.855082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.855103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.865042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.865130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.865151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.865169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.865178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.865205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.875014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.875096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.875118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.875129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.875138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.875159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 
00:39:23.110 [2024-07-22 19:43:41.885219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.885308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.885329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.885341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.885349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.885370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.895100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.895190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.895218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.895229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.895237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.895259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.905111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.905193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.905218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.905232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.905242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.905263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 
00:39:23.110 [2024-07-22 19:43:41.915164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.915235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.915252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.915260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.110 [2024-07-22 19:43:41.915266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.110 [2024-07-22 19:43:41.915285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.110 qpair failed and we were unable to recover it. 00:39:23.110 [2024-07-22 19:43:41.925324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.110 [2024-07-22 19:43:41.925403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.110 [2024-07-22 19:43:41.925420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.110 [2024-07-22 19:43:41.925428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.925435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.111 [2024-07-22 19:43:41.925453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.111 qpair failed and we were unable to recover it. 00:39:23.111 [2024-07-22 19:43:41.935181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:41.935257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:41.935274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:41.935283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.935289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.111 [2024-07-22 19:43:41.935305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.111 qpair failed and we were unable to recover it. 
00:39:23.111 [2024-07-22 19:43:41.945229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:41.945308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:41.945326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:41.945334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.945340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.111 [2024-07-22 19:43:41.945356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.111 qpair failed and we were unable to recover it. 00:39:23.111 [2024-07-22 19:43:41.955225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:41.955294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:41.955311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:41.955319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.955325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.111 [2024-07-22 19:43:41.955341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.111 qpair failed and we were unable to recover it. 00:39:23.111 [2024-07-22 19:43:41.965387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:41.965468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:41.965486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:41.965494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.965501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.111 [2024-07-22 19:43:41.965516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.111 qpair failed and we were unable to recover it. 
00:39:23.111 [2024-07-22 19:43:41.975308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:41.975386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:41.975403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:41.975411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.975417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.111 [2024-07-22 19:43:41.975433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.111 qpair failed and we were unable to recover it. 00:39:23.111 [2024-07-22 19:43:41.985353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:41.985425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:41.985443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:41.985453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.985460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:23.111 [2024-07-22 19:43:41.985479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.111 qpair failed and we were unable to recover it. 00:39:23.111 [2024-07-22 19:43:41.986026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:23.111 [2024-07-22 19:43:41.995321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:41.995399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:41.995425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:41.995435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:41.995442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:23.111 [2024-07-22 19:43:41.995462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:23.111 qpair failed and we were unable to recover it. 
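[editor's note] The block of failures above repeats one pattern: the host keeps retrying the Fabrics CONNECT for its I/O queues, the target rejects each attempt because it no longer recognizes controller ID 0x1 (the disconnect test has torn that controller down), and the host-side poll reports the rejection as status type 1, status code 130 (0x82) before giving up with a -6 (ENXIO) transport error. As a hedged illustration only -- this is not SPDK code, and the mapping is my reading of the NVMe over Fabrics Connect command status values -- a small decoder for those (sct, sc) pairs might look like this:

    /* Hypothetical decoder for the (sct, sc) pairs seen in the CONNECT
     * failures above. The mapping follows the NVMe-oF Connect command
     * status table as I understand it; it is illustrative, not SPDK API. */
    #include <stdio.h>

    static const char *decode_connect_status(int sct, int sc)
    {
        if (sct == 0x1) {                 /* Command Specific Status */
            switch (sc) {
            case 0x80: return "Connect: incompatible format";
            case 0x81: return "Connect: controller busy";
            case 0x82: return "Connect: invalid parameters";   /* sc 130 in the log */
            case 0x83: return "Connect: restart discovery";
            case 0x84: return "Connect: invalid host";
            }
        }
        if (sct == 0x0 && sc == 0x0) {
            return "success";
        }
        return "other/unknown status";
    }

    int main(void)
    {
        /* The failing entries above report "sct 1, sc 130" (0x82). */
        printf("sct 1, sc 130 -> %s\n", decode_connect_status(0x1, 130));
        return 0;
    }

Read that way, "sct 1, sc 130" is a Connect rejection for invalid parameters, which is consistent with the target's "Unknown controller ID 0x1" complaint on the same attempts.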
00:39:23.111 [2024-07-22 19:43:42.005587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:42.005666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:42.005683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:42.005692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:42.005698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:23.111 [2024-07-22 19:43:42.005715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:23.111 qpair failed and we were unable to recover it. 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Write completed with 
error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 Read completed with error (sct=0, sc=8) 00:39:23.111 starting I/O failed 00:39:23.111 [2024-07-22 19:43:42.007132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:23.111 [2024-07-22 19:43:42.015535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.111 [2024-07-22 19:43:42.015726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.111 [2024-07-22 19:43:42.015807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.111 [2024-07-22 19:43:42.015857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.111 [2024-07-22 19:43:42.015886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500038fe80 00:39:23.111 [2024-07-22 19:43:42.015960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:23.112 qpair failed and we were unable to recover it. 00:39:23.112 [2024-07-22 19:43:42.025518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.112 [2024-07-22 19:43:42.025680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.112 [2024-07-22 19:43:42.025744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.112 [2024-07-22 19:43:42.025776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.112 [2024-07-22 19:43:42.025800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500038fe80 00:39:23.112 [2024-07-22 19:43:42.025859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:23.112 qpair failed and we were unable to recover it. 00:39:23.112 [2024-07-22 19:43:42.035468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.112 [2024-07-22 19:43:42.035565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.112 [2024-07-22 19:43:42.035599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.112 [2024-07-22 19:43:42.035614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.112 [2024-07-22 19:43:42.035625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000388b80 00:39:23.112 [2024-07-22 19:43:42.035661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:23.112 qpair failed and we were unable to recover it. 
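[editor's note] Once the connection is gone, everything the host still had queued on those qpairs completes in error -- that is the burst of "Read/Write completed with error ... starting I/O failed" lines and the repeated "CQ transport error -6 (No such device or address)" messages above. In an SPDK host application this condition surfaces as a negative return from spdk_nvme_qpair_process_completions(). The sketch below is my own illustration, not the autotest's code; it assumes a controller that is still reachable and omits resubmission of the I/O that failed on the dead qpair.

    /* Minimal sketch: poll one I/O qpair and, on a transport error, drop it
     * and allocate a fresh qpair from the same controller. Assumes the
     * controller was connected elsewhere; error handling is abbreviated. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    poll_or_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        /* max_completions == 0: reap everything that is ready. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc >= 0) {
            return qpair;        /* rc completions reaped, qpair still healthy */
        }

        /* rc is a negative errno, e.g. -ENXIO (-6) as in the log above. */
        fprintf(stderr, "qpair poll failed: %d (%s)\n", rc, strerror(-rc));

        spdk_nvme_ctrlr_free_io_qpair(qpair);
        /* NULL opts: fall back to the controller's default I/O qpair options. */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    }

In this run the test tool ultimately reports "qpair failed and we were unable to recover it" for each of these qpairs; whether and how long to keep retrying along these lines is left to the application.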
00:39:23.112 [2024-07-22 19:43:42.045683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:23.112 [2024-07-22 19:43:42.045801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:23.112 [2024-07-22 19:43:42.045826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:23.112 [2024-07-22 19:43:42.045839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:23.112 [2024-07-22 19:43:42.045849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000388b80 00:39:23.112 [2024-07-22 19:43:42.045877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:23.112 qpair failed and we were unable to recover it. 00:39:23.112 [2024-07-22 19:43:42.046747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:23.373 Initializing NVMe Controllers 00:39:23.373 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:23.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:23.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:39:23.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:39:23.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:39:23.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:39:23.373 Initialization complete. Launching workers. 
00:39:23.373 Starting thread on core 1 00:39:23.373 Starting thread on core 2 00:39:23.373 Starting thread on core 0 00:39:23.373 Starting thread on core 3 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:39:23.373 00:39:23.373 real 0m11.505s 00:39:23.373 user 0m19.868s 00:39:23.373 sys 0m4.126s 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:23.373 ************************************ 00:39:23.373 END TEST nvmf_target_disconnect_tc2 00:39:23.373 ************************************ 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:23.373 rmmod nvme_tcp 00:39:23.373 rmmod nvme_fabrics 00:39:23.373 rmmod nvme_keyring 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3186457 ']' 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3186457 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3186457 ']' 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3186457 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3186457 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3186457' 00:39:23.373 
killing process with pid 3186457 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3186457 00:39:23.373 19:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3186457 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.758 19:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.671 19:43:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:26.671 00:39:26.671 real 0m22.235s 00:39:26.671 user 0m49.930s 00:39:26.671 sys 0m9.945s 00:39:26.671 19:43:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:26.671 19:43:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:26.671 ************************************ 00:39:26.671 END TEST nvmf_target_disconnect 00:39:26.671 ************************************ 00:39:26.671 19:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:26.671 19:43:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:26.671 00:39:26.671 real 8m7.622s 00:39:26.671 user 18m21.081s 00:39:26.671 sys 2m19.004s 00:39:26.671 19:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:26.671 19:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.671 ************************************ 00:39:26.671 END TEST nvmf_host 00:39:26.671 ************************************ 00:39:26.671 19:43:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:39:26.671 00:39:26.671 real 30m57.853s 00:39:26.671 user 77m52.022s 00:39:26.671 sys 8m0.473s 00:39:26.671 19:43:45 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:26.671 19:43:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:26.671 ************************************ 00:39:26.671 END TEST nvmf_tcp 00:39:26.671 ************************************ 00:39:26.671 19:43:45 -- common/autotest_common.sh@1142 -- # return 0 00:39:26.671 19:43:45 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:39:26.671 19:43:45 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:26.671 19:43:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:26.671 19:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:26.671 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:39:26.671 ************************************ 00:39:26.671 START TEST spdkcli_nvmf_tcp 00:39:26.671 ************************************ 00:39:26.671 19:43:45 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:26.932 * Looking for test storage... 00:39:26.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:26.932 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3188374 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3188374 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3188374 ']' 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:26.933 19:43:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:26.933 [2024-07-22 19:43:45.774674] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:26.933 [2024-07-22 19:43:45.774794] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188374 ] 00:39:26.933 EAL: No free 2048 kB hugepages reported on node 1 00:39:27.194 [2024-07-22 19:43:45.902198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:27.194 [2024-07-22 19:43:46.084293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.194 [2024-07-22 19:43:46.084431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:27.765 19:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:27.765 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:27.765 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:27.765 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:27.765 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:27.765 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:27.765 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:27.765 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:27.765 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:27.765 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:27.765 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:27.765 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:27.765 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:27.765 ' 00:39:30.307 [2024-07-22 19:43:48.978401] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:31.248 [2024-07-22 19:43:50.142271] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:33.800 [2024-07-22 19:43:52.276444] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:35.183 [2024-07-22 19:43:54.109835] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:36.566 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:36.566 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:36.566 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:36.566 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:36.566 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:36.566 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:36.566 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:36.566 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:36.566 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:36.566 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:36.566 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:36.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:36.566 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:36.827 19:43:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:37.087 19:43:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:37.087 19:43:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:37.348 19:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:37.348 19:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:37.348 19:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.348 19:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:37.348 19:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:37.348 19:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.348 19:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:37.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:37.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:37.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:37.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:37.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:37.348 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:37.348 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:37.348 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:37.348 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:37.348 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:37.348 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:37.348 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:37.348 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:37.348 ' 00:39:42.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:42.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:42.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:42.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:42.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:42.682 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:42.682 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:42.682 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:42.682 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:42.682 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:42.682 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:39:42.682 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:42.682 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:42.682 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3188374 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3188374 ']' 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3188374 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3188374 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3188374' 00:39:42.682 killing process with pid 3188374 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3188374 00:39:42.682 19:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3188374 00:39:43.625 19:44:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:43.625 19:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:43.625 19:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3188374 ']' 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3188374 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3188374 ']' 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3188374 00:39:43.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3188374) - No such process 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3188374 is not found' 00:39:43.626 Process with pid 3188374 is not found 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:43.626 00:39:43.626 real 0m16.698s 00:39:43.626 user 0m33.618s 00:39:43.626 sys 0m0.861s 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:43.626 19:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:43.626 ************************************ 00:39:43.626 END TEST spdkcli_nvmf_tcp 00:39:43.626 ************************************ 00:39:43.626 19:44:02 -- common/autotest_common.sh@1142 -- # return 0 00:39:43.626 19:44:02 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:43.626 19:44:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:43.626 19:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:43.626 19:44:02 -- common/autotest_common.sh@10 -- # set +x 00:39:43.626 ************************************ 00:39:43.626 START TEST nvmf_identify_passthru 00:39:43.626 ************************************ 00:39:43.626 19:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:43.626 * Looking for test storage... 00:39:43.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:43.626 19:44:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:43.626 19:44:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.626 19:44:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.626 19:44:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:43.626 19:44:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:43.626 19:44:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.626 19:44:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.626 19:44:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:43.626 19:44:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.626 19:44:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:43.626 19:44:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:43.626 19:44:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:43.626 19:44:02 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:39:43.626 19:44:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:50.213 19:44:08 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:50.213 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:50.214 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:50.214 19:44:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:50.214 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:50.214 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:50.214 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:50.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
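The nvmf_tcp_init sequence that follows turns the two ice ports found above into a self-contained loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1). Condensed into plain shell, the steps amount to roughly the following (interface and namespace names as in this run; they will differ on other NICs):

  TGT_IF=cvl_0_0          # port handed to the target
  INI_IF=cvl_0_1          # port left in the default namespace for the initiator
  NS=cvl_0_0_ns_spdk
  sudo ip netns add $NS
  sudo ip link set $TGT_IF netns $NS
  sudo ip addr add 10.0.0.1/24 dev $INI_IF
  sudo ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
  sudo ip link set $INI_IF up
  sudo ip netns exec $NS ip link set $TGT_IF up
  sudo ip netns exec $NS ip link set lo up
  sudo iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  sudo ip netns exec $NS ping -c 1 10.0.0.1                           # target -> initiator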
00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:50.214 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:50.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:50.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:39:50.475 00:39:50.475 --- 10.0.0.2 ping statistics --- 00:39:50.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.475 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:50.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:50.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:39:50.475 00:39:50.475 --- 10.0.0.1 ping statistics --- 00:39:50.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.475 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:50.475 19:44:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:50.475 19:44:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:50.475 19:44:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:50.475 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:39:50.736 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:39:50.736 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:39:50.736 19:44:09 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:39:50.736 19:44:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:39:50.736 19:44:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:39:50.736 19:44:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:50.736 19:44:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:50.736 19:44:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:50.736 EAL: No free 2048 kB hugepages reported on node 1 00:39:51.307 
19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:39:51.307 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:51.307 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:51.307 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:51.307 EAL: No free 2048 kB hugepages reported on node 1 00:39:51.878 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:39:51.878 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:51.878 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:51.878 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3195376 00:39:51.878 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:51.878 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:51.878 19:44:10 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3195376 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3195376 ']' 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:51.878 19:44:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:51.878 [2024-07-22 19:44:10.795262] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:51.878 [2024-07-22 19:44:10.795373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:52.139 EAL: No free 2048 kB hugepages reported on node 1 00:39:52.140 [2024-07-22 19:44:10.913068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:52.400 [2024-07-22 19:44:11.094117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:52.400 [2024-07-22 19:44:11.094163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
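With the local NVMe device identified (serial S64GNE0R605487 on 0000:65:00.0) and nvmf_tgt started under --wait-for-rpc inside the namespace, the configuration that follows is driven entirely over JSON-RPC via rpc_cmd, which ultimately goes through scripts/rpc.py. The same setup could be reproduced by hand with roughly these calls (a sketch; it assumes the default /var/tmp/spdk.sock RPC socket and the addresses used in this run):

  RPC=./scripts/rpc.py
  $RPC nvmf_set_config --passthru-identify-ctrlr     # must be set before framework init
  $RPC framework_start_init                          # releases the --wait-for-rpc hold
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                           # should list cnode1, as in the dump below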
00:39:52.400 [2024-07-22 19:44:11.094176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:52.400 [2024-07-22 19:44:11.094186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:52.400 [2024-07-22 19:44:11.094195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:52.400 [2024-07-22 19:44:11.094315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:52.400 [2024-07-22 19:44:11.094399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:52.400 [2024-07-22 19:44:11.094547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.400 [2024-07-22 19:44:11.094573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:52.661 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:52.661 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:39:52.661 19:44:11 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:52.661 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.661 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:52.661 INFO: Log level set to 20 00:39:52.661 INFO: Requests: 00:39:52.661 { 00:39:52.661 "jsonrpc": "2.0", 00:39:52.661 "method": "nvmf_set_config", 00:39:52.661 "id": 1, 00:39:52.661 "params": { 00:39:52.661 "admin_cmd_passthru": { 00:39:52.661 "identify_ctrlr": true 00:39:52.661 } 00:39:52.661 } 00:39:52.661 } 00:39:52.661 00:39:52.661 INFO: response: 00:39:52.661 { 00:39:52.661 "jsonrpc": "2.0", 00:39:52.661 "id": 1, 00:39:52.661 "result": true 00:39:52.661 } 00:39:52.661 00:39:52.661 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.661 19:44:11 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:52.661 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.661 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:52.661 INFO: Setting log level to 20 00:39:52.661 INFO: Setting log level to 20 00:39:52.661 INFO: Log level set to 20 00:39:52.661 INFO: Log level set to 20 00:39:52.661 INFO: Requests: 00:39:52.661 { 00:39:52.661 "jsonrpc": "2.0", 00:39:52.661 "method": "framework_start_init", 00:39:52.661 "id": 1 00:39:52.661 } 00:39:52.661 00:39:52.661 INFO: Requests: 00:39:52.661 { 00:39:52.661 "jsonrpc": "2.0", 00:39:52.661 "method": "framework_start_init", 00:39:52.661 "id": 1 00:39:52.661 } 00:39:52.661 00:39:52.922 [2024-07-22 19:44:11.782801] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:52.922 INFO: response: 00:39:52.922 { 00:39:52.922 "jsonrpc": "2.0", 00:39:52.922 "id": 1, 00:39:52.922 "result": true 00:39:52.922 } 00:39:52.922 00:39:52.922 INFO: response: 00:39:52.922 { 00:39:52.922 "jsonrpc": "2.0", 00:39:52.922 "id": 1, 00:39:52.922 "result": true 00:39:52.922 } 00:39:52.922 00:39:52.922 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.922 19:44:11 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:52.922 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.922 19:44:11 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:52.922 INFO: Setting log level to 40 00:39:52.922 INFO: Setting log level to 40 00:39:52.922 INFO: Setting log level to 40 00:39:52.922 [2024-07-22 19:44:11.798207] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:52.922 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.922 19:44:11 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:52.922 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:52.922 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:52.922 19:44:11 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:39:52.922 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.922 19:44:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:53.493 Nvme0n1 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.493 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.493 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.493 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:53.493 [2024-07-22 19:44:12.215210] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.493 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:53.493 [ 00:39:53.493 { 00:39:53.493 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:53.493 "subtype": "Discovery", 00:39:53.493 "listen_addresses": [], 00:39:53.493 "allow_any_host": true, 00:39:53.493 "hosts": [] 00:39:53.493 }, 00:39:53.493 { 00:39:53.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:53.493 "subtype": "NVMe", 00:39:53.493 "listen_addresses": [ 00:39:53.493 { 00:39:53.493 "trtype": "TCP", 00:39:53.493 "adrfam": "IPv4", 00:39:53.493 "traddr": "10.0.0.2", 00:39:53.493 "trsvcid": "4420" 00:39:53.493 } 00:39:53.493 ], 00:39:53.493 "allow_any_host": true, 00:39:53.493 "hosts": [], 00:39:53.493 "serial_number": 
"SPDK00000000000001", 00:39:53.493 "model_number": "SPDK bdev Controller", 00:39:53.493 "max_namespaces": 1, 00:39:53.493 "min_cntlid": 1, 00:39:53.493 "max_cntlid": 65519, 00:39:53.493 "namespaces": [ 00:39:53.493 { 00:39:53.493 "nsid": 1, 00:39:53.493 "bdev_name": "Nvme0n1", 00:39:53.493 "name": "Nvme0n1", 00:39:53.493 "nguid": "36344730526054870025384500000044", 00:39:53.493 "uuid": "36344730-5260-5487-0025-384500000044" 00:39:53.493 } 00:39:53.493 ] 00:39:53.493 } 00:39:53.493 ] 00:39:53.493 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.493 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:53.493 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:53.493 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:53.493 EAL: No free 2048 kB hugepages reported on node 1 00:39:53.754 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:39:53.754 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:53.754 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:53.754 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:53.754 EAL: No free 2048 kB hugepages reported on node 1 00:39:54.014 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:39:54.014 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:39:54.014 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:39:54.014 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:54.014 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:54.014 19:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:54.014 rmmod nvme_tcp 00:39:54.014 rmmod nvme_fabrics 00:39:54.014 rmmod nvme_keyring 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:39:54.014 19:44:12 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3195376 ']' 00:39:54.014 19:44:12 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3195376 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3195376 ']' 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3195376 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3195376 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:54.014 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3195376' 00:39:54.014 killing process with pid 3195376 00:39:54.015 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3195376 00:39:54.015 19:44:12 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3195376 00:39:54.957 19:44:13 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:54.957 19:44:13 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:54.957 19:44:13 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:54.957 19:44:13 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:54.957 19:44:13 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:54.957 19:44:13 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.957 19:44:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:54.957 19:44:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:57.502 19:44:15 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:57.502 00:39:57.502 real 0m13.620s 00:39:57.502 user 0m12.424s 00:39:57.502 sys 0m6.162s 00:39:57.502 19:44:15 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:57.502 19:44:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:57.502 ************************************ 00:39:57.502 END TEST nvmf_identify_passthru 00:39:57.502 ************************************ 00:39:57.502 19:44:15 -- common/autotest_common.sh@1142 -- # return 0 00:39:57.502 19:44:15 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:57.502 19:44:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:57.502 19:44:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:57.502 19:44:15 -- common/autotest_common.sh@10 -- # set +x 00:39:57.502 ************************************ 00:39:57.502 START TEST nvmf_dif 00:39:57.502 ************************************ 00:39:57.502 19:44:16 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:57.502 * Looking for test storage... 
00:39:57.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:57.502 19:44:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:57.502 19:44:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:57.503 19:44:16 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.503 19:44:16 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.503 19:44:16 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.503 19:44:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.503 19:44:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.503 19:44:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.503 19:44:16 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:39:57.503 19:44:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:57.503 19:44:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:57.503 19:44:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:57.503 19:44:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:57.503 19:44:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:57.503 19:44:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.503 19:44:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:57.503 19:44:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:57.503 19:44:16 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:39:57.503 19:44:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:04.081 19:44:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:04.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:04.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:04.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:04.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:04.082 19:44:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:04.341 19:44:23 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:04.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:04.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:40:04.341 00:40:04.341 --- 10.0.0.2 ping statistics --- 00:40:04.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.341 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:04.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:04.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:40:04.341 00:40:04.341 --- 10.0.0.1 ping statistics --- 00:40:04.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.341 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:04.341 19:44:23 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:07.638 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:07.638 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:07.638 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:07.899 19:44:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:07.899 19:44:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3201543 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3201543 00:40:07.899 19:44:26 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3201543 ']' 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:07.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:07.899 19:44:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:07.899 [2024-07-22 19:44:26.786984] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:07.900 [2024-07-22 19:44:26.787092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:08.160 EAL: No free 2048 kB hugepages reported on node 1 00:40:08.160 [2024-07-22 19:44:26.916374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.160 [2024-07-22 19:44:27.096710] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:08.160 [2024-07-22 19:44:27.096754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:08.160 [2024-07-22 19:44:27.096766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:08.160 [2024-07-22 19:44:27.096776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:08.161 [2024-07-22 19:44:27.096786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:08.161 [2024-07-22 19:44:27.096817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:40:08.776 19:44:27 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:08.776 19:44:27 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:08.776 19:44:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:08.776 19:44:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:08.776 [2024-07-22 19:44:27.569077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.776 19:44:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:08.776 19:44:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:08.776 ************************************ 00:40:08.776 START TEST fio_dif_1_default 00:40:08.776 ************************************ 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:08.776 bdev_null0 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:08.776 [2024-07-22 19:44:27.653466] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:08.776 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:08.776 { 00:40:08.776 "params": { 00:40:08.776 "name": "Nvme$subsystem", 00:40:08.776 "trtype": "$TEST_TRANSPORT", 00:40:08.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.776 "adrfam": "ipv4", 00:40:08.776 "trsvcid": "$NVMF_PORT", 00:40:08.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.777 "hdgst": ${hdgst:-false}, 00:40:08.777 "ddgst": ${ddgst:-false} 00:40:08.777 }, 00:40:08.777 "method": "bdev_nvme_attach_controller" 00:40:08.777 } 00:40:08.777 EOF 00:40:08.777 )") 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:08.777 "params": { 00:40:08.777 "name": "Nvme0", 00:40:08.777 "trtype": "tcp", 00:40:08.777 "traddr": "10.0.0.2", 00:40:08.777 "adrfam": "ipv4", 00:40:08.777 "trsvcid": "4420", 00:40:08.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:08.777 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:08.777 "hdgst": false, 00:40:08.777 "ddgst": false 00:40:08.777 }, 00:40:08.777 "method": "bdev_nvme_attach_controller" 00:40:08.777 }' 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:08.777 19:44:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:09.405 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:09.405 fio-3.35 00:40:09.405 Starting 1 thread 00:40:09.405 EAL: No free 2048 kB hugepages reported on node 1 00:40:21.643 00:40:21.643 filename0: (groupid=0, jobs=1): err= 0: pid=3202069: Mon Jul 22 19:44:38 2024 00:40:21.643 read: IOPS=190, BW=763KiB/s (782kB/s)(7664KiB/10041msec) 00:40:21.643 slat (nsec): min=5915, max=44815, avg=7496.93, stdev=2471.11 00:40:21.643 clat (usec): min=623, max=41841, avg=20940.58, stdev=20193.22 00:40:21.643 lat (usec): min=630, max=41870, avg=20948.08, stdev=20192.90 00:40:21.643 clat percentiles (usec): 00:40:21.643 | 1.00th=[ 725], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 783], 00:40:21.643 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 1893], 60.00th=[41157], 00:40:21.643 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:21.643 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:40:21.643 | 99.99th=[41681] 00:40:21.643 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=764.80, stdev=14.31, samples=20 00:40:21.643 iops : min= 176, max= 192, avg=191.20, stdev= 3.58, samples=20 00:40:21.643 lat (usec) : 750=5.01%, 1000=44.73% 00:40:21.643 lat (msec) : 2=0.37%, 50=49.90% 00:40:21.643 cpu : usr=95.78%, sys=3.98%, ctx=14, majf=0, minf=1635 00:40:21.643 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:21.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.643 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.643 
latency : target=0, window=0, percentile=100.00%, depth=4 00:40:21.643 00:40:21.643 Run status group 0 (all jobs): 00:40:21.643 READ: bw=763KiB/s (782kB/s), 763KiB/s-763KiB/s (782kB/s-782kB/s), io=7664KiB (7848kB), run=10041-10041msec 00:40:21.643 ----------------------------------------------------- 00:40:21.643 Suppressions used: 00:40:21.643 count bytes template 00:40:21.643 1 8 /usr/src/fio/parse.c 00:40:21.643 1 8 libtcmalloc_minimal.so 00:40:21.643 1 904 libcrypto.so 00:40:21.643 ----------------------------------------------------- 00:40:21.643 00:40:21.643 19:44:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 00:40:21.644 real 0m12.127s 00:40:21.644 user 0m25.340s 00:40:21.644 sys 0m0.943s 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 ************************************ 00:40:21.644 END TEST fio_dif_1_default 00:40:21.644 ************************************ 00:40:21.644 19:44:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:21.644 19:44:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:21.644 19:44:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:21.644 19:44:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 ************************************ 00:40:21.644 START TEST fio_dif_1_multi_subsystems 00:40:21.644 ************************************ 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@18 -- # local sub_id=0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 bdev_null0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 [2024-07-22 19:44:39.856706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 bdev_null1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:21.644 { 00:40:21.644 "params": { 00:40:21.644 "name": "Nvme$subsystem", 00:40:21.644 "trtype": "$TEST_TRANSPORT", 00:40:21.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:21.644 "adrfam": "ipv4", 00:40:21.644 "trsvcid": "$NVMF_PORT", 00:40:21.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:21.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:21.644 "hdgst": ${hdgst:-false}, 00:40:21.644 "ddgst": ${ddgst:-false} 00:40:21.644 }, 00:40:21.644 "method": "bdev_nvme_attach_controller" 00:40:21.644 } 00:40:21.644 EOF 00:40:21.644 )") 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:40:21.644 19:44:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:21.644 { 00:40:21.644 "params": { 00:40:21.644 "name": "Nvme$subsystem", 00:40:21.644 "trtype": "$TEST_TRANSPORT", 00:40:21.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:21.644 "adrfam": "ipv4", 00:40:21.644 "trsvcid": "$NVMF_PORT", 00:40:21.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:21.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:21.644 "hdgst": ${hdgst:-false}, 00:40:21.644 "ddgst": ${ddgst:-false} 00:40:21.644 }, 00:40:21.644 "method": "bdev_nvme_attach_controller" 00:40:21.644 } 00:40:21.644 EOF 00:40:21.644 )") 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:21.644 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
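The two bdev_nvme_attach_controller entries printed just below are what the create_json_sub_conf/gen_nvmf_target_json helpers feed to fio's spdk_bdev ioengine over /dev/fd/62, with the job file arriving on /dev/fd/61. Reproduced outside the harness, the same attach could be written to an ordinary file; a rough sketch, assuming the standard SPDK "subsystems" JSON config wrapper and illustrative file names (bdev.json, job.fio), with connection parameters taken from this run:

cat > bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_nvme_attach_controller",
   "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
              "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false}},
  {"method": "bdev_nvme_attach_controller",
   "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
              "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false}}
]}]}
EOF
# job.fio addresses the resulting bdevs by name, e.g. filename=Nvme0n1 and filename=Nvme1n1
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio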
00:40:21.645 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:40:21.645 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:21.645 "params": { 00:40:21.645 "name": "Nvme0", 00:40:21.645 "trtype": "tcp", 00:40:21.645 "traddr": "10.0.0.2", 00:40:21.645 "adrfam": "ipv4", 00:40:21.645 "trsvcid": "4420", 00:40:21.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:21.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:21.645 "hdgst": false, 00:40:21.645 "ddgst": false 00:40:21.645 }, 00:40:21.645 "method": "bdev_nvme_attach_controller" 00:40:21.645 },{ 00:40:21.645 "params": { 00:40:21.645 "name": "Nvme1", 00:40:21.645 "trtype": "tcp", 00:40:21.645 "traddr": "10.0.0.2", 00:40:21.645 "adrfam": "ipv4", 00:40:21.645 "trsvcid": "4420", 00:40:21.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:21.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:21.645 "hdgst": false, 00:40:21.645 "ddgst": false 00:40:21.645 }, 00:40:21.645 "method": "bdev_nvme_attach_controller" 00:40:21.645 }' 00:40:21.645 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:21.645 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:21.645 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:40:21.645 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:21.645 19:44:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:21.645 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:21.645 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:21.645 fio-3.35 00:40:21.645 Starting 2 threads 00:40:21.645 EAL: No free 2048 kB hugepages reported on node 1 00:40:33.900 00:40:33.900 filename0: (groupid=0, jobs=1): err= 0: pid=3204588: Mon Jul 22 19:44:51 2024 00:40:33.900 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:40:33.900 slat (nsec): min=5946, max=49553, avg=7316.87, stdev=2243.58 00:40:33.900 clat (usec): min=40955, max=43009, avg=41985.62, stdev=130.89 00:40:33.900 lat (usec): min=40961, max=43019, avg=41992.93, stdev=131.33 00:40:33.900 clat percentiles (usec): 00:40:33.900 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:40:33.900 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:33.900 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:33.900 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:40:33.900 | 99.99th=[43254] 00:40:33.900 bw ( KiB/s): min= 352, max= 384, per=33.92%, avg=380.80, stdev= 9.85, samples=20 00:40:33.900 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:40:33.900 lat (msec) : 50=100.00% 00:40:33.900 cpu : usr=97.11%, sys=2.66%, ctx=12, majf=0, minf=1635 00:40:33.900 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:33.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.900 issued rwts: total=956,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:40:33.900 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:33.900 filename1: (groupid=0, jobs=1): err= 0: pid=3204589: Mon Jul 22 19:44:51 2024 00:40:33.900 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10022msec) 00:40:33.900 slat (nsec): min=5926, max=48665, avg=7603.83, stdev=2140.85 00:40:33.900 clat (usec): min=1003, max=46229, avg=21575.43, stdev=20273.76 00:40:33.900 lat (usec): min=1009, max=46277, avg=21583.03, stdev=20273.51 00:40:33.900 clat percentiles (usec): 00:40:33.900 | 1.00th=[ 1057], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1172], 00:40:33.900 | 30.00th=[ 1254], 40.00th=[ 1303], 50.00th=[41681], 60.00th=[41681], 00:40:33.900 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:40:33.900 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:40:33.900 | 99.99th=[46400] 00:40:33.900 bw ( KiB/s): min= 673, max= 768, per=66.05%, avg=740.85, stdev=34.76, samples=20 00:40:33.900 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:40:33.900 lat (msec) : 2=49.78%, 50=50.22% 00:40:33.900 cpu : usr=96.94%, sys=2.81%, ctx=12, majf=0, minf=1638 00:40:33.900 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:33.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.900 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.900 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:33.900 00:40:33.900 Run status group 0 (all jobs): 00:40:33.900 READ: bw=1120KiB/s (1147kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10022-10040msec 00:40:33.900 ----------------------------------------------------- 00:40:33.900 Suppressions used: 00:40:33.900 count bytes template 00:40:33.900 2 16 /usr/src/fio/parse.c 00:40:33.900 1 8 libtcmalloc_minimal.so 00:40:33.900 1 904 libcrypto.so 00:40:33.900 ----------------------------------------------------- 00:40:33.900 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for 
sub in "$@" 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:33.900 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.901 00:40:33.901 real 0m12.539s 00:40:33.901 user 0m37.670s 00:40:33.901 sys 0m1.122s 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 ************************************ 00:40:33.901 END TEST fio_dif_1_multi_subsystems 00:40:33.901 ************************************ 00:40:33.901 19:44:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:33.901 19:44:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:33.901 19:44:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:33.901 19:44:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 ************************************ 00:40:33.901 START TEST fio_dif_rand_params 00:40:33.901 ************************************ 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:33.901 19:44:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 bdev_null0 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.901 [2024-07-22 19:44:52.474706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:33.901 { 00:40:33.901 "params": { 00:40:33.901 "name": "Nvme$subsystem", 00:40:33.901 "trtype": "$TEST_TRANSPORT", 00:40:33.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.901 "adrfam": "ipv4", 00:40:33.901 "trsvcid": "$NVMF_PORT", 00:40:33.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.901 "hdgst": ${hdgst:-false}, 00:40:33.901 "ddgst": ${ddgst:-false} 00:40:33.901 }, 
00:40:33.901 "method": "bdev_nvme_attach_controller" 00:40:33.901 } 00:40:33.901 EOF 00:40:33.901 )") 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:33.901 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:33.902 "params": { 00:40:33.902 "name": "Nvme0", 00:40:33.902 "trtype": "tcp", 00:40:33.902 "traddr": "10.0.0.2", 00:40:33.902 "adrfam": "ipv4", 00:40:33.902 "trsvcid": "4420", 00:40:33.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:33.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:33.902 "hdgst": false, 00:40:33.902 "ddgst": false 00:40:33.902 }, 00:40:33.902 "method": "bdev_nvme_attach_controller" 00:40:33.902 }' 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:33.902 19:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:34.166 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:34.166 ... 
00:40:34.166 fio-3.35 00:40:34.166 Starting 3 threads 00:40:34.166 EAL: No free 2048 kB hugepages reported on node 1 00:40:40.755 00:40:40.755 filename0: (groupid=0, jobs=1): err= 0: pid=3207108: Mon Jul 22 19:44:58 2024 00:40:40.755 read: IOPS=162, BW=20.3MiB/s (21.3MB/s)(102MiB/5016msec) 00:40:40.755 slat (nsec): min=5981, max=38857, avg=8830.75, stdev=1756.76 00:40:40.755 clat (usec): min=6746, max=95031, avg=18471.98, stdev=16241.35 00:40:40.755 lat (usec): min=6753, max=95041, avg=18480.81, stdev=16241.26 00:40:40.755 clat percentiles (usec): 00:40:40.755 | 1.00th=[ 7308], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9765], 00:40:40.755 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11994], 60.00th=[12780], 00:40:40.755 | 70.00th=[13829], 80.00th=[15926], 90.00th=[51643], 95.00th=[53216], 00:40:40.755 | 99.00th=[54789], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:40:40.755 | 99.99th=[94897] 00:40:40.755 bw ( KiB/s): min=12288, max=26624, per=32.56%, avg=20761.60, stdev=5012.85, samples=10 00:40:40.755 iops : min= 96, max= 208, avg=162.20, stdev=39.16, samples=10 00:40:40.755 lat (msec) : 10=23.59%, 20=59.83%, 50=3.07%, 100=13.51% 00:40:40.755 cpu : usr=95.99%, sys=3.75%, ctx=17, majf=0, minf=1635 00:40:40.755 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:40.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.755 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.755 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:40.755 filename0: (groupid=0, jobs=1): err= 0: pid=3207109: Mon Jul 22 19:44:58 2024 00:40:40.755 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(113MiB/5035msec) 00:40:40.755 slat (nsec): min=6005, max=44479, avg=10108.17, stdev=2133.29 00:40:40.755 clat (usec): min=6129, max=91301, avg=16711.98, stdev=14143.08 00:40:40.755 lat (usec): min=6140, max=91312, avg=16722.08, stdev=14143.11 00:40:40.755 clat percentiles (usec): 00:40:40.755 | 1.00th=[ 6718], 5.00th=[ 7504], 10.00th=[ 8160], 20.00th=[ 9372], 00:40:40.755 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11600], 60.00th=[12518], 00:40:40.755 | 70.00th=[13829], 80.00th=[15139], 90.00th=[50070], 95.00th=[52167], 00:40:40.755 | 99.00th=[54789], 99.50th=[56886], 99.90th=[91751], 99.95th=[91751], 00:40:40.755 | 99.99th=[91751] 00:40:40.755 bw ( KiB/s): min=15360, max=39424, per=36.12%, avg=23035.70, stdev=7520.17, samples=10 00:40:40.755 iops : min= 120, max= 308, avg=179.90, stdev=58.77, samples=10 00:40:40.755 lat (msec) : 10=29.13%, 20=57.70%, 50=2.99%, 100=10.19% 00:40:40.755 cpu : usr=96.19%, sys=3.54%, ctx=12, majf=0, minf=1640 00:40:40.755 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:40.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.755 issued rwts: total=903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.755 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:40.755 filename0: (groupid=0, jobs=1): err= 0: pid=3207110: Mon Jul 22 19:44:58 2024 00:40:40.755 read: IOPS=157, BW=19.7MiB/s (20.6MB/s)(99.0MiB/5036msec) 00:40:40.755 slat (nsec): min=5988, max=45963, avg=10462.53, stdev=2320.42 00:40:40.755 clat (usec): min=7186, max=93319, avg=19058.69, stdev=16916.34 00:40:40.755 lat (usec): min=7198, max=93330, avg=19069.15, stdev=16916.29 00:40:40.755 clat percentiles (usec): 
00:40:40.755 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10028], 00:40:40.755 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12125], 60.00th=[13173], 00:40:40.755 | 70.00th=[14222], 80.00th=[16057], 90.00th=[51643], 95.00th=[53216], 00:40:40.755 | 99.00th=[91751], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:40:40.755 | 99.99th=[92799] 00:40:40.755 bw ( KiB/s): min=12032, max=28672, per=31.67%, avg=20198.40, stdev=5104.97, samples=10 00:40:40.755 iops : min= 94, max= 224, avg=157.80, stdev=39.88, samples=10 00:40:40.755 lat (msec) : 10=20.20%, 20=62.88%, 50=1.52%, 100=15.40% 00:40:40.755 cpu : usr=95.89%, sys=3.83%, ctx=13, majf=0, minf=1633 00:40:40.755 IO depths : 1=4.8%, 2=95.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:40.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.755 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.755 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:40.755 00:40:40.755 Run status group 0 (all jobs): 00:40:40.755 READ: bw=62.3MiB/s (65.3MB/s), 19.7MiB/s-22.4MiB/s (20.6MB/s-23.5MB/s), io=314MiB (329MB), run=5016-5036msec 00:40:40.755 ----------------------------------------------------- 00:40:40.755 Suppressions used: 00:40:40.755 count bytes template 00:40:40.755 5 44 /usr/src/fio/parse.c 00:40:40.755 1 8 libtcmalloc_minimal.so 00:40:40.755 1 904 libcrypto.so 00:40:40.755 ----------------------------------------------------- 00:40:40.755 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:41.017 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:41.018 19:44:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 bdev_null0 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 [2024-07-22 19:44:59.779671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 bdev_null1 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 bdev_null2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@532 -- # config=() 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:41.018 { 00:40:41.018 "params": { 00:40:41.018 "name": "Nvme$subsystem", 00:40:41.018 "trtype": "$TEST_TRANSPORT", 00:40:41.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.018 "adrfam": "ipv4", 00:40:41.018 "trsvcid": "$NVMF_PORT", 00:40:41.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.018 "hdgst": ${hdgst:-false}, 00:40:41.018 "ddgst": ${ddgst:-false} 00:40:41.018 }, 00:40:41.018 "method": "bdev_nvme_attach_controller" 00:40:41.018 } 00:40:41.018 EOF 00:40:41.018 )") 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:41.018 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:41.019 { 00:40:41.019 "params": { 00:40:41.019 "name": "Nvme$subsystem", 00:40:41.019 "trtype": "$TEST_TRANSPORT", 00:40:41.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.019 "adrfam": "ipv4", 00:40:41.019 "trsvcid": "$NVMF_PORT", 00:40:41.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.019 
"hdgst": ${hdgst:-false}, 00:40:41.019 "ddgst": ${ddgst:-false} 00:40:41.019 }, 00:40:41.019 "method": "bdev_nvme_attach_controller" 00:40:41.019 } 00:40:41.019 EOF 00:40:41.019 )") 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:41.019 { 00:40:41.019 "params": { 00:40:41.019 "name": "Nvme$subsystem", 00:40:41.019 "trtype": "$TEST_TRANSPORT", 00:40:41.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:41.019 "adrfam": "ipv4", 00:40:41.019 "trsvcid": "$NVMF_PORT", 00:40:41.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:41.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:41.019 "hdgst": ${hdgst:-false}, 00:40:41.019 "ddgst": ${ddgst:-false} 00:40:41.019 }, 00:40:41.019 "method": "bdev_nvme_attach_controller" 00:40:41.019 } 00:40:41.019 EOF 00:40:41.019 )") 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:41.019 "params": { 00:40:41.019 "name": "Nvme0", 00:40:41.019 "trtype": "tcp", 00:40:41.019 "traddr": "10.0.0.2", 00:40:41.019 "adrfam": "ipv4", 00:40:41.019 "trsvcid": "4420", 00:40:41.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:41.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:41.019 "hdgst": false, 00:40:41.019 "ddgst": false 00:40:41.019 }, 00:40:41.019 "method": "bdev_nvme_attach_controller" 00:40:41.019 },{ 00:40:41.019 "params": { 00:40:41.019 "name": "Nvme1", 00:40:41.019 "trtype": "tcp", 00:40:41.019 "traddr": "10.0.0.2", 00:40:41.019 "adrfam": "ipv4", 00:40:41.019 "trsvcid": "4420", 00:40:41.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:41.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:41.019 "hdgst": false, 00:40:41.019 "ddgst": false 00:40:41.019 }, 00:40:41.019 "method": "bdev_nvme_attach_controller" 00:40:41.019 },{ 00:40:41.019 "params": { 00:40:41.019 "name": "Nvme2", 00:40:41.019 "trtype": "tcp", 00:40:41.019 "traddr": "10.0.0.2", 00:40:41.019 "adrfam": "ipv4", 00:40:41.019 "trsvcid": "4420", 00:40:41.019 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:41.019 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:41.019 "hdgst": false, 00:40:41.019 "ddgst": false 00:40:41.019 }, 00:40:41.019 "method": "bdev_nvme_attach_controller" 00:40:41.019 }' 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:41.019 19:44:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:41.646 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:41.646 ... 00:40:41.647 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:41.647 ... 00:40:41.647 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:41.647 ... 00:40:41.647 fio-3.35 00:40:41.647 Starting 24 threads 00:40:41.647 EAL: No free 2048 kB hugepages reported on node 1 00:40:53.881 00:40:53.881 filename0: (groupid=0, jobs=1): err= 0: pid=3208695: Mon Jul 22 19:45:11 2024 00:40:53.881 read: IOPS=442, BW=1772KiB/s (1814kB/s)(17.3MiB/10005msec) 00:40:53.881 slat (nsec): min=6437, max=78546, avg=16921.13, stdev=9277.15 00:40:53.881 clat (usec): min=4607, max=56648, avg=35976.91, stdev=3700.57 00:40:53.881 lat (usec): min=4622, max=56677, avg=35993.83, stdev=3701.02 00:40:53.881 clat percentiles (usec): 00:40:53.881 | 1.00th=[12649], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:40:53.881 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.881 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.881 | 99.00th=[39060], 99.50th=[46924], 99.90th=[50594], 99.95th=[56361], 00:40:53.881 | 99.99th=[56886] 00:40:53.881 bw ( KiB/s): min= 1664, max= 2048, per=4.19%, avg=1771.58, stdev=97.84, samples=19 00:40:53.881 iops : min= 416, max= 512, avg=442.89, stdev=24.46, samples=19 00:40:53.881 lat (msec) : 10=0.36%, 20=0.93%, 50=98.44%, 100=0.27% 00:40:53.881 cpu : usr=98.81%, sys=0.86%, ctx=18, majf=0, minf=1633 00:40:53.881 IO depths : 1=5.6%, 2=11.7%, 4=24.5%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:40:53.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.881 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.881 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.881 filename0: (groupid=0, jobs=1): err= 0: pid=3208696: Mon Jul 22 19:45:11 2024 00:40:53.881 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:40:53.881 slat (nsec): min=3067, max=31196, avg=8166.31, stdev=2447.33 00:40:53.881 clat (usec): min=4678, max=38634, avg=34315.93, stdev=5315.96 00:40:53.881 lat (usec): min=4687, max=38643, avg=34324.10, stdev=5316.16 00:40:53.881 clat percentiles (usec): 00:40:53.881 | 1.00th=[10028], 5.00th=[23462], 10.00th=[25297], 20.00th=[35914], 00:40:53.881 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.881 | 70.00th=[36439], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.881 | 99.00th=[38011], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:40:53.881 | 99.99th=[38536] 00:40:53.881 bw ( KiB/s): min= 1664, max= 2192, per=4.41%, avg=1865.68, stdev=167.94, samples=19 00:40:53.881 iops : min= 416, max= 548, avg=466.42, stdev=41.99, samples=19 00:40:53.881 lat (msec) : 10=1.01%, 20=0.56%, 50=98.43% 00:40:53.881 cpu : usr=99.17%, sys=0.54%, ctx=15, majf=0, minf=1637 00:40:53.881 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:53.881 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.881 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.881 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.881 filename0: (groupid=0, jobs=1): err= 0: pid=3208697: Mon Jul 22 19:45:11 2024 00:40:53.881 read: IOPS=434, BW=1739KiB/s (1780kB/s)(17.0MiB/10012msec) 00:40:53.881 slat (nsec): min=6547, max=71804, avg=20594.72, stdev=10040.39 00:40:53.881 clat (usec): min=21089, max=71746, avg=36627.33, stdev=2667.41 00:40:53.881 lat (usec): min=21096, max=71776, avg=36647.93, stdev=2666.76 00:40:53.881 clat percentiles (usec): 00:40:53.881 | 1.00th=[29492], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.881 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.881 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.881 | 99.00th=[43779], 99.50th=[49021], 99.90th=[71828], 99.95th=[71828], 00:40:53.881 | 99.99th=[71828] 00:40:53.881 bw ( KiB/s): min= 1536, max= 1792, per=4.10%, avg=1731.16, stdev=78.14, samples=19 00:40:53.881 iops : min= 384, max= 448, avg=432.79, stdev=19.54, samples=19 00:40:53.881 lat (msec) : 50=99.54%, 100=0.46% 00:40:53.881 cpu : usr=98.89%, sys=0.77%, ctx=26, majf=0, minf=1636 00:40:53.881 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:40:53.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.881 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.881 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.881 filename0: (groupid=0, jobs=1): err= 0: pid=3208698: Mon Jul 22 19:45:11 2024 00:40:53.881 read: IOPS=437, BW=1750KiB/s (1792kB/s)(17.1MiB/10014msec) 00:40:53.881 slat (nsec): min=6300, max=59240, avg=14894.03, stdev=7873.65 00:40:53.881 clat (usec): min=16303, max=73690, avg=36452.86, stdev=2939.01 00:40:53.881 lat (usec): min=16326, max=73722, avg=36467.76, stdev=2939.21 00:40:53.881 clat percentiles (usec): 00:40:53.881 | 1.00th=[24249], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.881 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.881 | 70.00th=[36963], 80.00th=[36963], 90.00th=[38011], 95.00th=[38011], 00:40:53.881 | 99.00th=[40109], 99.50th=[52167], 99.90th=[73925], 99.95th=[73925], 00:40:53.881 | 99.99th=[73925] 00:40:53.881 bw ( KiB/s): min= 1660, max= 2016, per=4.12%, avg=1742.95, stdev=92.18, samples=19 00:40:53.881 iops : min= 415, max= 504, avg=435.74, stdev=23.05, samples=19 00:40:53.881 lat (msec) : 20=0.21%, 50=99.11%, 100=0.68% 00:40:53.882 cpu : usr=98.98%, sys=0.71%, ctx=21, majf=0, minf=1634 00:40:53.882 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 issued rwts: total=4380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.882 filename0: (groupid=0, jobs=1): err= 0: pid=3208699: Mon Jul 22 19:45:11 2024 00:40:53.882 read: IOPS=435, BW=1743KiB/s (1785kB/s)(17.1MiB/10022msec) 00:40:53.882 slat (nsec): min=4320, max=97860, avg=21858.05, stdev=14285.96 00:40:53.882 clat (usec): min=22147, max=70952, avg=36507.40, 
stdev=1831.73 00:40:53.882 lat (usec): min=22159, max=70972, avg=36529.26, stdev=1830.21 00:40:53.882 clat percentiles (usec): 00:40:53.882 | 1.00th=[34341], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.882 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.882 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.882 | 99.00th=[39060], 99.50th=[49546], 99.90th=[51119], 99.95th=[70779], 00:40:53.882 | 99.99th=[70779] 00:40:53.882 bw ( KiB/s): min= 1664, max= 1792, per=4.11%, avg=1738.05, stdev=64.56, samples=19 00:40:53.882 iops : min= 416, max= 448, avg=434.47, stdev=16.19, samples=19 00:40:53.882 lat (msec) : 50=99.79%, 100=0.21% 00:40:53.882 cpu : usr=97.07%, sys=1.76%, ctx=57, majf=0, minf=1634 00:40:53.882 IO depths : 1=6.0%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.882 filename0: (groupid=0, jobs=1): err= 0: pid=3208700: Mon Jul 22 19:45:11 2024 00:40:53.882 read: IOPS=437, BW=1750KiB/s (1792kB/s)(17.1MiB/10023msec) 00:40:53.882 slat (usec): min=6, max=106, avg=18.53, stdev=13.66 00:40:53.882 clat (usec): min=20598, max=52976, avg=36432.29, stdev=2962.16 00:40:53.882 lat (usec): min=20607, max=52994, avg=36450.82, stdev=2962.98 00:40:53.882 clat percentiles (usec): 00:40:53.882 | 1.00th=[23725], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.882 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.882 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38536], 00:40:53.882 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:40:53.882 | 99.99th=[53216] 00:40:53.882 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1747.00, stdev=62.49, samples=20 00:40:53.882 iops : min= 416, max= 448, avg=436.75, stdev=15.62, samples=20 00:40:53.882 lat (msec) : 50=99.68%, 100=0.32% 00:40:53.882 cpu : usr=98.71%, sys=0.91%, ctx=101, majf=0, minf=1637 00:40:53.882 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.882 filename0: (groupid=0, jobs=1): err= 0: pid=3208701: Mon Jul 22 19:45:11 2024 00:40:53.882 read: IOPS=437, BW=1749KiB/s (1790kB/s)(17.1MiB/10029msec) 00:40:53.882 slat (nsec): min=6338, max=99987, avg=23183.03, stdev=14763.22 00:40:53.882 clat (usec): min=12186, max=59247, avg=36402.75, stdev=2422.28 00:40:53.882 lat (usec): min=12195, max=59254, avg=36425.94, stdev=2421.15 00:40:53.882 clat percentiles (usec): 00:40:53.882 | 1.00th=[23987], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:40:53.882 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.882 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.882 | 99.00th=[40633], 99.50th=[48497], 99.90th=[56886], 99.95th=[58983], 00:40:53.882 | 99.99th=[59507] 00:40:53.882 bw ( KiB/s): min= 1648, max= 1792, per=4.14%, avg=1747.00, stdev=62.71, samples=20 00:40:53.882 iops : 
min= 412, max= 448, avg=436.75, stdev=15.68, samples=20 00:40:53.882 lat (msec) : 20=0.07%, 50=99.66%, 100=0.27% 00:40:53.882 cpu : usr=99.06%, sys=0.61%, ctx=15, majf=0, minf=1635 00:40:53.882 IO depths : 1=5.1%, 2=11.2%, 4=24.7%, 8=51.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.882 filename0: (groupid=0, jobs=1): err= 0: pid=3208702: Mon Jul 22 19:45:11 2024 00:40:53.882 read: IOPS=434, BW=1739KiB/s (1780kB/s)(17.0MiB/10012msec) 00:40:53.882 slat (usec): min=6, max=121, avg=20.72, stdev=10.67 00:40:53.882 clat (usec): min=24466, max=71685, avg=36614.05, stdev=2426.39 00:40:53.882 lat (usec): min=24476, max=71711, avg=36634.77, stdev=2425.89 00:40:53.882 clat percentiles (usec): 00:40:53.882 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.882 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.882 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.882 | 99.00th=[39060], 99.50th=[45351], 99.90th=[71828], 99.95th=[71828], 00:40:53.882 | 99.99th=[71828] 00:40:53.882 bw ( KiB/s): min= 1536, max= 1792, per=4.10%, avg=1731.16, stdev=78.14, samples=19 00:40:53.882 iops : min= 384, max= 448, avg=432.79, stdev=19.54, samples=19 00:40:53.882 lat (msec) : 50=99.63%, 100=0.37% 00:40:53.882 cpu : usr=98.99%, sys=0.60%, ctx=52, majf=0, minf=1634 00:40:53.882 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.882 filename1: (groupid=0, jobs=1): err= 0: pid=3208703: Mon Jul 22 19:45:11 2024 00:40:53.882 read: IOPS=439, BW=1758KiB/s (1801kB/s)(17.2MiB/10046msec) 00:40:53.882 slat (nsec): min=6326, max=54974, avg=11733.93, stdev=6429.28 00:40:53.882 clat (usec): min=20419, max=51876, avg=36261.79, stdev=2817.96 00:40:53.882 lat (usec): min=20426, max=51886, avg=36273.52, stdev=2817.71 00:40:53.882 clat percentiles (usec): 00:40:53.882 | 1.00th=[21103], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.882 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.882 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38536], 00:40:53.882 | 99.00th=[39060], 99.50th=[47973], 99.90th=[51119], 99.95th=[51643], 00:40:53.882 | 99.99th=[51643] 00:40:53.882 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1759.80, stdev=70.33, samples=20 00:40:53.882 iops : min= 416, max= 480, avg=439.95, stdev=17.58, samples=20 00:40:53.882 lat (msec) : 50=99.73%, 100=0.27% 00:40:53.882 cpu : usr=98.82%, sys=0.75%, ctx=45, majf=0, minf=1636 00:40:53.882 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.882 
filename1: (groupid=0, jobs=1): err= 0: pid=3208704: Mon Jul 22 19:45:11 2024 00:40:53.882 read: IOPS=435, BW=1743KiB/s (1785kB/s)(17.1MiB/10022msec) 00:40:53.882 slat (nsec): min=6339, max=98024, avg=22740.23, stdev=15665.15 00:40:53.882 clat (usec): min=21747, max=51797, avg=36498.48, stdev=1655.39 00:40:53.882 lat (usec): min=21757, max=51814, avg=36521.22, stdev=1653.75 00:40:53.882 clat percentiles (usec): 00:40:53.882 | 1.00th=[34341], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.882 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.882 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.882 | 99.00th=[39584], 99.50th=[49546], 99.90th=[49546], 99.95th=[50594], 00:40:53.882 | 99.99th=[51643] 00:40:53.882 bw ( KiB/s): min= 1664, max= 1792, per=4.11%, avg=1737.89, stdev=64.75, samples=19 00:40:53.882 iops : min= 416, max= 448, avg=434.47, stdev=16.19, samples=19 00:40:53.882 lat (msec) : 50=99.91%, 100=0.09% 00:40:53.882 cpu : usr=99.08%, sys=0.59%, ctx=14, majf=0, minf=1634 00:40:53.882 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.882 filename1: (groupid=0, jobs=1): err= 0: pid=3208705: Mon Jul 22 19:45:11 2024 00:40:53.882 read: IOPS=437, BW=1749KiB/s (1791kB/s)(17.1MiB/10024msec) 00:40:53.882 slat (nsec): min=6289, max=96956, avg=16650.92, stdev=10629.68 00:40:53.882 clat (usec): min=20558, max=47875, avg=36441.16, stdev=1641.92 00:40:53.882 lat (usec): min=20567, max=47883, avg=36457.81, stdev=1641.29 00:40:53.882 clat percentiles (usec): 00:40:53.882 | 1.00th=[28967], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.882 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.882 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.882 | 99.00th=[39060], 99.50th=[40633], 99.90th=[47449], 99.95th=[47973], 00:40:53.882 | 99.99th=[47973] 00:40:53.882 bw ( KiB/s): min= 1664, max= 1792, per=4.13%, avg=1746.80, stdev=62.35, samples=20 00:40:53.882 iops : min= 416, max= 448, avg=436.70, stdev=15.59, samples=20 00:40:53.882 lat (msec) : 50=100.00% 00:40:53.882 cpu : usr=97.80%, sys=1.15%, ctx=144, majf=0, minf=1636 00:40:53.882 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:53.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.883 filename1: (groupid=0, jobs=1): err= 0: pid=3208706: Mon Jul 22 19:45:11 2024 00:40:53.883 read: IOPS=434, BW=1738KiB/s (1780kB/s)(17.0MiB/10014msec) 00:40:53.883 slat (nsec): min=5881, max=57905, avg=13375.73, stdev=6673.27 00:40:53.883 clat (usec): min=16638, max=73648, avg=36657.68, stdev=1724.81 00:40:53.883 lat (usec): min=16648, max=73674, avg=36671.06, stdev=1724.77 00:40:53.883 clat percentiles (usec): 00:40:53.883 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:40:53.883 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.883 | 
70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.883 | 99.00th=[39060], 99.50th=[47449], 99.90th=[52691], 99.95th=[52691], 00:40:53.883 | 99.99th=[73925] 00:40:53.883 bw ( KiB/s): min= 1660, max= 1792, per=4.11%, avg=1737.89, stdev=65.19, samples=19 00:40:53.883 iops : min= 415, max= 448, avg=434.47, stdev=16.30, samples=19 00:40:53.883 lat (msec) : 20=0.05%, 50=99.49%, 100=0.46% 00:40:53.883 cpu : usr=95.15%, sys=2.70%, ctx=150, majf=0, minf=1631 00:40:53.883 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:53.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.883 filename1: (groupid=0, jobs=1): err= 0: pid=3208707: Mon Jul 22 19:45:11 2024 00:40:53.883 read: IOPS=433, BW=1734KiB/s (1776kB/s)(16.9MiB/10001msec) 00:40:53.883 slat (nsec): min=6171, max=77991, avg=20146.55, stdev=13144.52 00:40:53.883 clat (usec): min=22468, max=69745, avg=36726.08, stdev=3426.22 00:40:53.883 lat (usec): min=22483, max=69782, avg=36746.22, stdev=3424.87 00:40:53.883 clat percentiles (usec): 00:40:53.883 | 1.00th=[25560], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.883 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.883 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38536], 00:40:53.883 | 99.00th=[50594], 99.50th=[65799], 99.90th=[69731], 99.95th=[69731], 00:40:53.883 | 99.99th=[69731] 00:40:53.883 bw ( KiB/s): min= 1536, max= 1792, per=4.10%, avg=1731.16, stdev=75.92, samples=19 00:40:53.883 iops : min= 384, max= 448, avg=432.79, stdev=18.98, samples=19 00:40:53.883 lat (msec) : 50=98.52%, 100=1.48% 00:40:53.883 cpu : usr=98.86%, sys=0.81%, ctx=15, majf=0, minf=1632 00:40:53.883 IO depths : 1=5.4%, 2=11.0%, 4=23.8%, 8=52.6%, 16=7.3%, 32=0.0%, >=64=0.0% 00:40:53.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.883 filename1: (groupid=0, jobs=1): err= 0: pid=3208708: Mon Jul 22 19:45:11 2024 00:40:53.883 read: IOPS=441, BW=1767KiB/s (1810kB/s)(17.3MiB/10030msec) 00:40:53.883 slat (nsec): min=6434, max=57696, avg=12666.07, stdev=6316.52 00:40:53.883 clat (usec): min=7986, max=53711, avg=36100.85, stdev=3352.87 00:40:53.883 lat (usec): min=7998, max=53718, avg=36113.52, stdev=3353.03 00:40:53.883 clat percentiles (usec): 00:40:53.883 | 1.00th=[18220], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.883 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.883 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.883 | 99.00th=[39060], 99.50th=[46400], 99.90th=[51119], 99.95th=[51119], 00:40:53.883 | 99.99th=[53740] 00:40:53.883 bw ( KiB/s): min= 1664, max= 1920, per=4.18%, avg=1766.20, stdev=77.52, samples=20 00:40:53.883 iops : min= 416, max= 480, avg=441.55, stdev=19.38, samples=20 00:40:53.883 lat (msec) : 10=0.36%, 20=0.72%, 50=98.71%, 100=0.20% 00:40:53.883 cpu : usr=98.93%, sys=0.67%, ctx=52, majf=0, minf=1633 00:40:53.883 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 
00:40:53.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.883 filename1: (groupid=0, jobs=1): err= 0: pid=3208709: Mon Jul 22 19:45:11 2024 00:40:53.883 read: IOPS=437, BW=1750KiB/s (1792kB/s)(17.1MiB/10023msec) 00:40:53.883 slat (nsec): min=6372, max=56861, avg=12468.47, stdev=6719.39 00:40:53.883 clat (usec): min=23206, max=48180, avg=36469.07, stdev=1757.95 00:40:53.883 lat (usec): min=23214, max=48213, avg=36481.54, stdev=1757.58 00:40:53.883 clat percentiles (usec): 00:40:53.883 | 1.00th=[25822], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.883 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.883 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.883 | 99.00th=[39060], 99.50th=[39060], 99.90th=[47973], 99.95th=[47973], 00:40:53.883 | 99.99th=[47973] 00:40:53.883 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1747.00, stdev=62.49, samples=20 00:40:53.883 iops : min= 416, max= 448, avg=436.75, stdev=15.62, samples=20 00:40:53.883 lat (msec) : 50=100.00% 00:40:53.883 cpu : usr=99.06%, sys=0.61%, ctx=19, majf=0, minf=1637 00:40:53.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:53.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.883 filename1: (groupid=0, jobs=1): err= 0: pid=3208710: Mon Jul 22 19:45:11 2024 00:40:53.883 read: IOPS=451, BW=1806KiB/s (1850kB/s)(17.7MiB/10044msec) 00:40:53.883 slat (usec): min=6, max=108, avg=16.44, stdev=13.32 00:40:53.883 clat (usec): min=13994, max=81937, avg=35203.25, stdev=4968.27 00:40:53.883 lat (usec): min=14001, max=81972, avg=35219.70, stdev=4970.10 00:40:53.883 clat percentiles (usec): 00:40:53.883 | 1.00th=[20055], 5.00th=[24511], 10.00th=[26346], 20.00th=[35914], 00:40:53.883 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.883 | 70.00th=[36439], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.883 | 99.00th=[54264], 99.50th=[57934], 99.90th=[61604], 99.95th=[61604], 00:40:53.883 | 99.99th=[82314] 00:40:53.883 bw ( KiB/s): min= 1664, max= 2368, per=4.29%, avg=1811.80, stdev=188.83, samples=20 00:40:53.883 iops : min= 416, max= 592, avg=452.95, stdev=47.21, samples=20 00:40:53.883 lat (msec) : 20=1.01%, 50=97.93%, 100=1.06% 00:40:53.883 cpu : usr=98.72%, sys=0.85%, ctx=87, majf=0, minf=1631 00:40:53.883 IO depths : 1=4.6%, 2=9.9%, 4=22.0%, 8=55.5%, 16=7.9%, 32=0.0%, >=64=0.0% 00:40:53.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 issued rwts: total=4536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.883 filename2: (groupid=0, jobs=1): err= 0: pid=3208711: Mon Jul 22 19:45:11 2024 00:40:53.883 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10008msec) 00:40:53.883 slat (usec): min=6, max=122, avg=20.30, stdev=10.30 00:40:53.883 clat (usec): min=7097, max=57519, avg=36457.90, 
stdev=2394.15 00:40:53.883 lat (usec): min=7105, max=57552, avg=36478.20, stdev=2394.34 00:40:53.883 clat percentiles (usec): 00:40:53.883 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.883 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.883 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.883 | 99.00th=[39060], 99.50th=[40633], 99.90th=[57410], 99.95th=[57410], 00:40:53.883 | 99.99th=[57410] 00:40:53.883 bw ( KiB/s): min= 1660, max= 1792, per=4.11%, avg=1737.68, stdev=65.01, samples=19 00:40:53.883 iops : min= 415, max= 448, avg=434.42, stdev=16.25, samples=19 00:40:53.883 lat (msec) : 10=0.37%, 50=99.27%, 100=0.37% 00:40:53.883 cpu : usr=98.74%, sys=0.78%, ctx=184, majf=0, minf=1632 00:40:53.883 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:53.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.883 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.883 filename2: (groupid=0, jobs=1): err= 0: pid=3208712: Mon Jul 22 19:45:11 2024 00:40:53.883 read: IOPS=444, BW=1778KiB/s (1820kB/s)(17.4MiB/10008msec) 00:40:53.883 slat (nsec): min=6081, max=94228, avg=20475.58, stdev=13924.19 00:40:53.883 clat (usec): min=9330, max=61784, avg=35814.71, stdev=4837.97 00:40:53.883 lat (usec): min=9337, max=61813, avg=35835.18, stdev=4839.37 00:40:53.883 clat percentiles (usec): 00:40:53.883 | 1.00th=[22152], 5.00th=[25822], 10.00th=[30802], 20.00th=[35914], 00:40:53.883 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.883 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38536], 00:40:53.883 | 99.00th=[56361], 99.50th=[58459], 99.90th=[61604], 99.95th=[61604], 00:40:53.883 | 99.99th=[61604] 00:40:53.883 bw ( KiB/s): min= 1660, max= 1936, per=4.19%, avg=1772.37, stdev=77.06, samples=19 00:40:53.883 iops : min= 415, max= 484, avg=443.05, stdev=19.32, samples=19 00:40:53.884 lat (msec) : 10=0.36%, 20=0.54%, 50=97.30%, 100=1.80% 00:40:53.884 cpu : usr=98.99%, sys=0.68%, ctx=25, majf=0, minf=1636 00:40:53.884 IO depths : 1=4.2%, 2=9.1%, 4=20.4%, 8=57.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:40:53.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.884 filename2: (groupid=0, jobs=1): err= 0: pid=3208713: Mon Jul 22 19:45:11 2024 00:40:53.884 read: IOPS=449, BW=1799KiB/s (1842kB/s)(17.6MiB/10004msec) 00:40:53.884 slat (nsec): min=6320, max=98448, avg=15878.98, stdev=9988.93 00:40:53.884 clat (usec): min=12348, max=75642, avg=35462.84, stdev=5854.46 00:40:53.884 lat (usec): min=12363, max=75696, avg=35478.72, stdev=5855.48 00:40:53.884 clat percentiles (usec): 00:40:53.884 | 1.00th=[21890], 5.00th=[24249], 10.00th=[26608], 20.00th=[32637], 00:40:53.884 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.884 | 70.00th=[36963], 80.00th=[36963], 90.00th=[38536], 95.00th=[43254], 00:40:53.884 | 99.00th=[57410], 99.50th=[58459], 99.90th=[63701], 99.95th=[63701], 00:40:53.884 | 99.99th=[76022] 00:40:53.884 bw ( KiB/s): min= 1648, max= 2144, per=4.27%, avg=1803.58, 
stdev=115.75, samples=19 00:40:53.884 iops : min= 412, max= 536, avg=450.89, stdev=28.94, samples=19 00:40:53.884 lat (msec) : 20=0.62%, 50=96.89%, 100=2.49% 00:40:53.884 cpu : usr=97.74%, sys=1.21%, ctx=92, majf=0, minf=1636 00:40:53.884 IO depths : 1=1.2%, 2=4.1%, 4=13.1%, 8=68.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:40:53.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 complete : 0=0.0%, 4=91.5%, 8=4.7%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 issued rwts: total=4500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.884 filename2: (groupid=0, jobs=1): err= 0: pid=3208714: Mon Jul 22 19:45:11 2024 00:40:53.884 read: IOPS=438, BW=1756KiB/s (1798kB/s)(17.2MiB/10021msec) 00:40:53.884 slat (nsec): min=6292, max=93457, avg=21196.41, stdev=12223.87 00:40:53.884 clat (usec): min=13292, max=58480, avg=36264.68, stdev=3056.51 00:40:53.884 lat (usec): min=13313, max=58489, avg=36285.88, stdev=3057.05 00:40:53.884 clat percentiles (usec): 00:40:53.884 | 1.00th=[23987], 5.00th=[32637], 10.00th=[35390], 20.00th=[35914], 00:40:53.884 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.884 | 70.00th=[36963], 80.00th=[36963], 90.00th=[38011], 95.00th=[38536], 00:40:53.884 | 99.00th=[46400], 99.50th=[49546], 99.90th=[58459], 99.95th=[58459], 00:40:53.884 | 99.99th=[58459] 00:40:53.884 bw ( KiB/s): min= 1664, max= 1968, per=4.14%, avg=1750.53, stdev=84.28, samples=19 00:40:53.884 iops : min= 416, max= 492, avg=437.63, stdev=21.07, samples=19 00:40:53.884 lat (msec) : 20=0.18%, 50=99.36%, 100=0.45% 00:40:53.884 cpu : usr=98.99%, sys=0.66%, ctx=21, majf=0, minf=1634 00:40:53.884 IO depths : 1=5.0%, 2=10.3%, 4=22.2%, 8=54.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:40:53.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 issued rwts: total=4398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.884 filename2: (groupid=0, jobs=1): err= 0: pid=3208715: Mon Jul 22 19:45:11 2024 00:40:53.884 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10005msec) 00:40:53.884 slat (nsec): min=6240, max=91815, avg=14920.55, stdev=10309.50 00:40:53.884 clat (usec): min=7426, max=62078, avg=34419.93, stdev=6539.45 00:40:53.884 lat (usec): min=7433, max=62106, avg=34434.85, stdev=6540.68 00:40:53.884 clat percentiles (usec): 00:40:53.884 | 1.00th=[21103], 5.00th=[23200], 10.00th=[25035], 20.00th=[28705], 00:40:53.884 | 30.00th=[32375], 40.00th=[35914], 50.00th=[35914], 60.00th=[36439], 00:40:53.884 | 70.00th=[36439], 80.00th=[36963], 90.00th=[38536], 95.00th=[43779], 00:40:53.884 | 99.00th=[55837], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:40:53.884 | 99.99th=[62129] 00:40:53.884 bw ( KiB/s): min= 1648, max= 2032, per=4.36%, avg=1842.95, stdev=100.20, samples=19 00:40:53.884 iops : min= 412, max= 508, avg=460.74, stdev=25.05, samples=19 00:40:53.884 lat (msec) : 10=0.13%, 20=0.50%, 50=96.48%, 100=2.89% 00:40:53.884 cpu : usr=98.94%, sys=0.66%, ctx=60, majf=0, minf=1634 00:40:53.884 IO depths : 1=1.5%, 2=3.9%, 4=11.8%, 8=70.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:40:53.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 complete : 0=0.0%, 4=90.8%, 8=5.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 issued rwts: total=4636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
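Each of these per-filename blocks is produced by one fio process driven through SPDK's fio bdev plugin against the null bdevs exported over NVMe/TCP; the way that invocation is assembled is traced again further down for the next pass (gen_nvmf_target_json on /dev/fd/62, the generated job file on /dev/fd/61). A standalone sketch of the same pattern follows. The outer "subsystems"/"config" envelope is SPDK's usual JSON-config shape but is never echoed in this trace, the Nvme0n1 bdev name follows the normal <controller>n<nsid> convention, and the job options are back-filled from what the blocks themselves report (depth=16, roughly 10 s runtimes, 4 KiB reads), so treat all of those as assumptions rather than a literal replay of the run:

cat > /tmp/nvme_tcp_bdevs.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
cat > /tmp/rand_read.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
[rand_read]
filename=Nvme0n1
rw=randread
bs=4k
iodepth=16
EOF
# The plugin path and the --spdk_json_conf option mirror the traced command.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_tcp_bdevs.json /tmp/rand_read.fio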
00:40:53.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.884 filename2: (groupid=0, jobs=1): err= 0: pid=3208716: Mon Jul 22 19:45:11 2024 00:40:53.884 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10008msec) 00:40:53.884 slat (usec): min=6, max=126, avg=18.90, stdev= 9.25 00:40:53.884 clat (usec): min=8195, max=61584, avg=36486.22, stdev=2546.24 00:40:53.884 lat (usec): min=8202, max=61619, avg=36505.12, stdev=2546.32 00:40:53.884 clat percentiles (usec): 00:40:53.884 | 1.00th=[34341], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:40:53.884 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:40:53.884 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:40:53.884 | 99.00th=[39060], 99.50th=[39584], 99.90th=[61604], 99.95th=[61604], 00:40:53.884 | 99.99th=[61604] 00:40:53.884 bw ( KiB/s): min= 1660, max= 1792, per=4.11%, avg=1737.84, stdev=64.82, samples=19 00:40:53.884 iops : min= 415, max= 448, avg=434.42, stdev=16.25, samples=19 00:40:53.884 lat (msec) : 10=0.37%, 50=99.27%, 100=0.37% 00:40:53.884 cpu : usr=99.08%, sys=0.61%, ctx=15, majf=0, minf=1633 00:40:53.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:53.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.884 filename2: (groupid=0, jobs=1): err= 0: pid=3208717: Mon Jul 22 19:45:11 2024 00:40:53.884 read: IOPS=443, BW=1773KiB/s (1816kB/s)(17.3MiB/10008msec) 00:40:53.884 slat (nsec): min=6300, max=70307, avg=17228.09, stdev=9805.27 00:40:53.884 clat (usec): min=17712, max=67027, avg=35955.78, stdev=4193.07 00:40:53.884 lat (usec): min=17723, max=67075, avg=35973.01, stdev=4194.33 00:40:53.884 clat percentiles (usec): 00:40:53.884 | 1.00th=[23200], 5.00th=[26346], 10.00th=[34341], 20.00th=[35914], 00:40:53.884 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.884 | 70.00th=[36439], 80.00th=[36963], 90.00th=[37487], 95.00th=[38536], 00:40:53.884 | 99.00th=[53740], 99.50th=[56886], 99.90th=[66847], 99.95th=[66847], 00:40:53.884 | 99.99th=[66847] 00:40:53.884 bw ( KiB/s): min= 1536, max= 2096, per=4.20%, avg=1773.26, stdev=112.38, samples=19 00:40:53.884 iops : min= 384, max= 524, avg=443.32, stdev=28.10, samples=19 00:40:53.884 lat (msec) : 20=0.27%, 50=98.69%, 100=1.04% 00:40:53.884 cpu : usr=98.89%, sys=0.72%, ctx=67, majf=0, minf=1634 00:40:53.884 IO depths : 1=4.4%, 2=9.9%, 4=22.3%, 8=55.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:40:53.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 issued rwts: total=4436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.884 filename2: (groupid=0, jobs=1): err= 0: pid=3208718: Mon Jul 22 19:45:11 2024 00:40:53.884 read: IOPS=443, BW=1773KiB/s (1816kB/s)(17.3MiB/10007msec) 00:40:53.884 slat (usec): min=4, max=108, avg=19.01, stdev=15.25 00:40:53.884 clat (usec): min=15842, max=60566, avg=35933.37, stdev=4221.52 00:40:53.884 lat (usec): min=15850, max=60589, avg=35952.38, stdev=4223.03 00:40:53.884 clat percentiles (usec): 00:40:53.884 | 1.00th=[22938], 5.00th=[25822], 10.00th=[34341], 20.00th=[35914], 00:40:53.884 | 
30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:40:53.884 | 70.00th=[36963], 80.00th=[36963], 90.00th=[38011], 95.00th=[38536], 00:40:53.884 | 99.00th=[49021], 99.50th=[54789], 99.90th=[60556], 99.95th=[60556], 00:40:53.884 | 99.99th=[60556] 00:40:53.884 bw ( KiB/s): min= 1644, max= 2032, per=4.20%, avg=1773.05, stdev=97.03, samples=19 00:40:53.884 iops : min= 411, max= 508, avg=443.26, stdev=24.26, samples=19 00:40:53.884 lat (msec) : 20=0.61%, 50=98.44%, 100=0.95% 00:40:53.884 cpu : usr=98.94%, sys=0.74%, ctx=19, majf=0, minf=1636 00:40:53.884 IO depths : 1=3.9%, 2=9.3%, 4=22.2%, 8=55.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:40:53.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.884 issued rwts: total=4436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:53.884 00:40:53.884 Run status group 0 (all jobs): 00:40:53.884 READ: bw=41.3MiB/s (43.3MB/s), 1734KiB/s-1861KiB/s (1776kB/s-1906kB/s), io=414MiB (435MB), run=10001-10046msec 00:40:53.884 ----------------------------------------------------- 00:40:53.884 Suppressions used: 00:40:53.884 count bytes template 00:40:53.884 45 402 /usr/src/fio/parse.c 00:40:53.884 1 8 libtcmalloc_minimal.so 00:40:53.884 1 904 libcrypto.so 00:40:53.884 ----------------------------------------------------- 00:40:53.884 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.884 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 bdev_null0 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 [2024-07-22 19:45:12.663913] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 bdev_null1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:53.885 { 00:40:53.885 "params": { 00:40:53.885 "name": "Nvme$subsystem", 00:40:53.885 "trtype": "$TEST_TRANSPORT", 00:40:53.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:53.885 "adrfam": "ipv4", 00:40:53.885 "trsvcid": "$NVMF_PORT", 00:40:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:53.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:53.885 "hdgst": ${hdgst:-false}, 00:40:53.885 "ddgst": ${ddgst:-false} 00:40:53.885 }, 00:40:53.885 "method": "bdev_nvme_attach_controller" 00:40:53.885 } 00:40:53.885 EOF 00:40:53.885 )") 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:53.885 { 00:40:53.885 "params": { 00:40:53.885 "name": "Nvme$subsystem", 00:40:53.885 "trtype": "$TEST_TRANSPORT", 00:40:53.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:53.885 "adrfam": "ipv4", 00:40:53.885 "trsvcid": "$NVMF_PORT", 00:40:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:53.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:53.885 
"hdgst": ${hdgst:-false}, 00:40:53.885 "ddgst": ${ddgst:-false} 00:40:53.885 }, 00:40:53.885 "method": "bdev_nvme_attach_controller" 00:40:53.885 } 00:40:53.885 EOF 00:40:53.885 )") 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:53.885 19:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:53.886 "params": { 00:40:53.886 "name": "Nvme0", 00:40:53.886 "trtype": "tcp", 00:40:53.886 "traddr": "10.0.0.2", 00:40:53.886 "adrfam": "ipv4", 00:40:53.886 "trsvcid": "4420", 00:40:53.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:53.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:53.886 "hdgst": false, 00:40:53.886 "ddgst": false 00:40:53.886 }, 00:40:53.886 "method": "bdev_nvme_attach_controller" 00:40:53.886 },{ 00:40:53.886 "params": { 00:40:53.886 "name": "Nvme1", 00:40:53.886 "trtype": "tcp", 00:40:53.886 "traddr": "10.0.0.2", 00:40:53.886 "adrfam": "ipv4", 00:40:53.886 "trsvcid": "4420", 00:40:53.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:53.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:53.886 "hdgst": false, 00:40:53.886 "ddgst": false 00:40:53.886 }, 00:40:53.886 "method": "bdev_nvme_attach_controller" 00:40:53.886 }' 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:53.886 19:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.456 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:54.456 ... 00:40:54.456 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:54.456 ... 
00:40:54.456 fio-3.35 00:40:54.456 Starting 4 threads 00:40:54.456 EAL: No free 2048 kB hugepages reported on node 1 00:41:01.090 00:41:01.090 filename0: (groupid=0, jobs=1): err= 0: pid=3211704: Mon Jul 22 19:45:19 2024 00:41:01.090 read: IOPS=1899, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5002msec) 00:41:01.090 slat (nsec): min=5951, max=84325, avg=8428.37, stdev=3143.82 00:41:01.090 clat (usec): min=1889, max=6969, avg=4188.34, stdev=638.39 00:41:01.090 lat (usec): min=1913, max=6977, avg=4196.77, stdev=638.53 00:41:01.090 clat percentiles (usec): 00:41:01.090 | 1.00th=[ 2999], 5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3884], 00:41:01.090 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:41:01.090 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 5014], 95.00th=[ 5997], 00:41:01.090 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 6587], 99.95th=[ 6915], 00:41:01.090 | 99.99th=[ 6980] 00:41:01.090 bw ( KiB/s): min=14576, max=15904, per=25.15%, avg=15267.56, stdev=490.93, samples=9 00:41:01.090 iops : min= 1822, max= 1988, avg=1908.44, stdev=61.37, samples=9 00:41:01.090 lat (msec) : 2=0.04%, 4=39.52%, 10=60.43% 00:41:01.090 cpu : usr=96.66%, sys=3.04%, ctx=10, majf=0, minf=1635 00:41:01.090 IO depths : 1=0.1%, 2=0.4%, 4=69.4%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 issued rwts: total=9503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:01.090 filename0: (groupid=0, jobs=1): err= 0: pid=3211705: Mon Jul 22 19:45:19 2024 00:41:01.090 read: IOPS=1873, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5002msec) 00:41:01.090 slat (nsec): min=5937, max=52447, avg=8363.40, stdev=3206.45 00:41:01.090 clat (usec): min=1856, max=7704, avg=4247.66, stdev=665.66 00:41:01.090 lat (usec): min=1862, max=7713, avg=4256.03, stdev=665.73 00:41:01.090 clat percentiles (usec): 00:41:01.090 | 1.00th=[ 3261], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:41:01.090 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:41:01.090 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 5604], 95.00th=[ 5997], 00:41:01.090 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 7504], 00:41:01.090 | 99.99th=[ 7701] 00:41:01.090 bw ( KiB/s): min=14640, max=15360, per=24.66%, avg=14970.67, stdev=235.01, samples=9 00:41:01.090 iops : min= 1830, max= 1920, avg=1871.33, stdev=29.38, samples=9 00:41:01.090 lat (msec) : 2=0.02%, 4=35.53%, 10=64.45% 00:41:01.090 cpu : usr=96.40%, sys=3.28%, ctx=7, majf=0, minf=1635 00:41:01.090 IO depths : 1=0.1%, 2=0.4%, 4=72.0%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 issued rwts: total=9369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:01.090 filename1: (groupid=0, jobs=1): err= 0: pid=3211706: Mon Jul 22 19:45:19 2024 00:41:01.090 read: IOPS=1880, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5001msec) 00:41:01.090 slat (nsec): min=5913, max=52830, avg=8568.55, stdev=3161.96 00:41:01.090 clat (usec): min=1573, max=7780, avg=4230.39, stdev=637.15 00:41:01.090 lat (usec): min=1579, max=7786, avg=4238.96, stdev=636.80 00:41:01.090 clat percentiles (usec): 00:41:01.090 | 1.00th=[ 3163], 5.00th=[ 3687], 
10.00th=[ 3785], 20.00th=[ 3884], 00:41:01.090 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:41:01.090 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 5145], 95.00th=[ 5997], 00:41:01.090 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6783], 99.95th=[ 7308], 00:41:01.090 | 99.99th=[ 7767] 00:41:01.090 bw ( KiB/s): min=14560, max=15776, per=24.71%, avg=14997.00, stdev=436.08, samples=9 00:41:01.090 iops : min= 1820, max= 1972, avg=1874.56, stdev=54.55, samples=9 00:41:01.090 lat (msec) : 2=0.05%, 4=36.54%, 10=63.41% 00:41:01.090 cpu : usr=96.86%, sys=2.82%, ctx=10, majf=0, minf=1640 00:41:01.090 IO depths : 1=0.1%, 2=0.7%, 4=69.5%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 issued rwts: total=9406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:01.090 filename1: (groupid=0, jobs=1): err= 0: pid=3211707: Mon Jul 22 19:45:19 2024 00:41:01.090 read: IOPS=1933, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5002msec) 00:41:01.090 slat (nsec): min=5936, max=78556, avg=8384.52, stdev=3030.93 00:41:01.090 clat (usec): min=2158, max=7755, avg=4116.50, stdev=455.86 00:41:01.090 lat (usec): min=2166, max=7764, avg=4124.89, stdev=455.85 00:41:01.090 clat percentiles (usec): 00:41:01.090 | 1.00th=[ 3261], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3884], 00:41:01.090 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4178], 00:41:01.090 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4948], 00:41:01.090 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 7046], 99.95th=[ 7308], 00:41:01.090 | 99.99th=[ 7767] 00:41:01.090 bw ( KiB/s): min=14896, max=16080, per=25.49%, avg=15470.22, stdev=389.09, samples=9 00:41:01.090 iops : min= 1862, max= 2010, avg=1933.78, stdev=48.64, samples=9 00:41:01.090 lat (msec) : 4=44.59%, 10=55.41% 00:41:01.090 cpu : usr=96.58%, sys=3.10%, ctx=9, majf=0, minf=1635 00:41:01.090 IO depths : 1=0.1%, 2=0.5%, 4=67.3%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.090 issued rwts: total=9671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:01.090 00:41:01.090 Run status group 0 (all jobs): 00:41:01.090 READ: bw=59.3MiB/s (62.1MB/s), 14.6MiB/s-15.1MiB/s (15.3MB/s-15.8MB/s), io=296MiB (311MB), run=5001-5002msec 00:41:01.090 ----------------------------------------------------- 00:41:01.090 Suppressions used: 00:41:01.090 count bytes template 00:41:01.090 6 52 /usr/src/fio/parse.c 00:41:01.090 1 8 libtcmalloc_minimal.so 00:41:01.090 1 904 libcrypto.so 00:41:01.090 ----------------------------------------------------- 00:41:01.090 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.090 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.091 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.091 19:45:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:01.091 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.091 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.091 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.091 00:41:01.091 real 0m27.561s 00:41:01.091 user 5m17.978s 00:41:01.091 sys 0m5.094s 00:41:01.091 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:01.091 19:45:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.091 ************************************ 00:41:01.091 END TEST fio_dif_rand_params 00:41:01.091 ************************************ 00:41:01.091 19:45:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:41:01.091 19:45:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:01.091 19:45:20 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:01.091 19:45:20 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:01.091 19:45:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:01.352 ************************************ 00:41:01.352 START TEST fio_dif_digest 00:41:01.352 ************************************ 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:01.352 19:45:20 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.352 bdev_null0 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.352 [2024-07-22 19:45:20.117988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # 
for subsystem in "${@:-1}" 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:01.352 { 00:41:01.352 "params": { 00:41:01.352 "name": "Nvme$subsystem", 00:41:01.352 "trtype": "$TEST_TRANSPORT", 00:41:01.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:01.352 "adrfam": "ipv4", 00:41:01.352 "trsvcid": "$NVMF_PORT", 00:41:01.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:01.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:01.352 "hdgst": ${hdgst:-false}, 00:41:01.352 "ddgst": ${ddgst:-false} 00:41:01.352 }, 00:41:01.352 "method": "bdev_nvme_attach_controller" 00:41:01.352 } 00:41:01.352 EOF 00:41:01.352 )") 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:41:01.352 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
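The create_subsystems 0 step traced just above for fio_dif_digest is the same flow as in the earlier runs, except that the null bdev is created with --dif-type 3 and, in the attach-controller parameters printed a few lines below, hdgst and ddgst are forced to true so the NVMe/TCP header and data digests are exercised. Since rpc_cmd hands its arguments to scripts/rpc.py, the sequence can be written out directly as follows (any socket path or extra options the wrapper adds in this environment are omitted):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR"/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
"$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420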
00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:01.353 "params": { 00:41:01.353 "name": "Nvme0", 00:41:01.353 "trtype": "tcp", 00:41:01.353 "traddr": "10.0.0.2", 00:41:01.353 "adrfam": "ipv4", 00:41:01.353 "trsvcid": "4420", 00:41:01.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:01.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:01.353 "hdgst": true, 00:41:01.353 "ddgst": true 00:41:01.353 }, 00:41:01.353 "method": "bdev_nvme_attach_controller" 00:41:01.353 }' 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:01.353 19:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.613 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:01.613 ... 00:41:01.613 fio-3.35 00:41:01.613 Starting 3 threads 00:41:01.873 EAL: No free 2048 kB hugepages reported on node 1 00:41:14.100 00:41:14.100 filename0: (groupid=0, jobs=1): err= 0: pid=3213222: Mon Jul 22 19:45:31 2024 00:41:14.100 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(261MiB/10047msec) 00:41:14.100 slat (nsec): min=6440, max=44613, avg=11292.84, stdev=1903.93 00:41:14.100 clat (usec): min=6644, max=57020, avg=14428.02, stdev=4493.14 00:41:14.100 lat (usec): min=6654, max=57030, avg=14439.31, stdev=4493.14 00:41:14.100 clat percentiles (usec): 00:41:14.100 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[12387], 20.00th=[13173], 00:41:14.100 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:41:14.100 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926], 00:41:14.100 | 99.00th=[51643], 99.50th=[55313], 99.90th=[56361], 99.95th=[56886], 00:41:14.100 | 99.99th=[56886] 00:41:14.100 bw ( KiB/s): min=23040, max=29184, per=36.61%, avg=26649.60, stdev=1573.49, samples=20 00:41:14.100 iops : min= 180, max= 228, avg=208.20, stdev=12.29, samples=20 00:41:14.100 lat (msec) : 10=2.11%, 20=96.79%, 50=0.05%, 100=1.06% 00:41:14.100 cpu : usr=95.51%, sys=4.16%, ctx=24, majf=0, minf=1635 00:41:14.100 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.100 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.100 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:14.100 filename0: (groupid=0, jobs=1): err= 0: pid=3213223: Mon Jul 22 19:45:31 2024 00:41:14.100 read: IOPS=186, BW=23.3MiB/s (24.5MB/s)(234MiB/10046msec) 00:41:14.100 slat (nsec): min=6392, max=45841, avg=10150.32, stdev=1763.07 00:41:14.100 clat (usec): min=8155, max=60198, avg=16047.44, stdev=4315.45 00:41:14.100 lat (usec): min=8166, max=60208, avg=16057.59, stdev=4315.45 00:41:14.100 clat percentiles (usec): 00:41:14.100 | 1.00th=[10552], 5.00th=[12256], 10.00th=[13698], 
20.00th=[14615], 00:41:14.100 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188], 00:41:14.100 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:41:14.100 | 99.00th=[20841], 99.50th=[57410], 99.90th=[59507], 99.95th=[60031], 00:41:14.100 | 99.99th=[60031] 00:41:14.100 bw ( KiB/s): min=22528, max=25856, per=32.92%, avg=23961.60, stdev=998.07, samples=20 00:41:14.100 iops : min= 176, max= 202, avg=187.20, stdev= 7.80, samples=20 00:41:14.100 lat (msec) : 10=0.32%, 20=98.51%, 50=0.27%, 100=0.91% 00:41:14.100 cpu : usr=96.22%, sys=3.52%, ctx=16, majf=0, minf=1638 00:41:14.100 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.100 issued rwts: total=1874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.100 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:14.100 filename0: (groupid=0, jobs=1): err= 0: pid=3213224: Mon Jul 22 19:45:31 2024 00:41:14.100 read: IOPS=174, BW=21.9MiB/s (22.9MB/s)(220MiB/10050msec) 00:41:14.100 slat (nsec): min=6458, max=48004, avg=11368.64, stdev=2085.26 00:41:14.100 clat (usec): min=10199, max=60025, avg=17117.92, stdev=5192.18 00:41:14.100 lat (usec): min=10209, max=60035, avg=17129.29, stdev=5192.23 00:41:14.100 clat percentiles (usec): 00:41:14.100 | 1.00th=[11600], 5.00th=[13304], 10.00th=[14615], 20.00th=[15401], 00:41:14.100 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16581], 60.00th=[16909], 00:41:14.100 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19268], 00:41:14.100 | 99.00th=[56886], 99.50th=[58459], 99.90th=[58983], 99.95th=[60031], 00:41:14.100 | 99.99th=[60031] 00:41:14.100 bw ( KiB/s): min=20736, max=24576, per=30.86%, avg=22464.00, stdev=1030.30, samples=20 00:41:14.100 iops : min= 162, max= 192, avg=175.50, stdev= 8.05, samples=20 00:41:14.100 lat (msec) : 20=97.27%, 50=1.25%, 100=1.48% 00:41:14.100 cpu : usr=95.71%, sys=3.99%, ctx=15, majf=0, minf=1636 00:41:14.100 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.100 issued rwts: total=1757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.100 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:14.100 00:41:14.100 Run status group 0 (all jobs): 00:41:14.100 READ: bw=71.1MiB/s (74.5MB/s), 21.9MiB/s-25.9MiB/s (22.9MB/s-27.2MB/s), io=714MiB (749MB), run=10046-10050msec 00:41:14.100 ----------------------------------------------------- 00:41:14.100 Suppressions used: 00:41:14.100 count bytes template 00:41:14.100 5 44 /usr/src/fio/parse.c 00:41:14.100 1 8 libtcmalloc_minimal.so 00:41:14.100 1 904 libcrypto.so 00:41:14.100 ----------------------------------------------------- 00:41:14.100 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:14.100 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:14.100 00:41:14.100 real 0m12.210s 00:41:14.100 user 0m41.074s 00:41:14.100 sys 0m1.688s 00:41:14.101 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:14.101 19:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:14.101 ************************************ 00:41:14.101 END TEST fio_dif_digest 00:41:14.101 ************************************ 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:41:14.101 19:45:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:14.101 19:45:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:14.101 rmmod nvme_tcp 00:41:14.101 rmmod nvme_fabrics 00:41:14.101 rmmod nvme_keyring 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3201543 ']' 00:41:14.101 19:45:32 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3201543 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3201543 ']' 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3201543 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3201543 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3201543' 00:41:14.101 killing process with pid 3201543 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3201543 00:41:14.101 19:45:32 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3201543 00:41:14.673 19:45:33 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:41:14.673 19:45:33 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:17.975 Waiting for block devices as requested 00:41:17.975 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:17.975 0000:80:01.7 (8086 
0b00): vfio-pci -> ioatdma 00:41:17.975 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:17.975 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:17.975 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:18.236 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:18.236 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:18.236 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:18.497 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:18.497 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:18.757 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:18.757 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:18.758 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:18.758 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:19.018 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:19.018 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:19.018 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:19.279 19:45:38 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:19.279 19:45:38 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:19.279 19:45:38 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:19.279 19:45:38 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:19.279 19:45:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.279 19:45:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:19.279 19:45:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.828 19:45:40 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:21.828 00:41:21.828 real 1m24.228s 00:41:21.828 user 8m11.279s 00:41:21.828 sys 0m21.108s 00:41:21.828 19:45:40 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:21.828 19:45:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:21.828 ************************************ 00:41:21.828 END TEST nvmf_dif 00:41:21.828 ************************************ 00:41:21.828 19:45:40 -- common/autotest_common.sh@1142 -- # return 0 00:41:21.828 19:45:40 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:21.828 19:45:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:21.828 19:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:21.828 19:45:40 -- common/autotest_common.sh@10 -- # set +x 00:41:21.828 ************************************ 00:41:21.828 START TEST nvmf_abort_qd_sizes 00:41:21.828 ************************************ 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:21.828 * Looking for test storage... 
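The nvmf_dif teardown traced above follows the usual nvmftestfini pattern. The lines below are a minimal sketch of the equivalent manual steps, assuming the nvmf_tgt PID (3201543), the namespace name cvl_0_0_ns_spdk and the initiator interface cvl_0_1 from this run; _remove_spdk_ns is only approximated here by an explicit ip netns delete.

    # nvmfcleanup: unload the host-side NVMe/TCP modules
    modprobe -v -r nvme-tcp          # rmmod nvme_tcp, nvme_fabrics, nvme_keyring as logged above
    modprobe -v -r nvme-fabrics
    # killprocess: stop the nvmf_tgt reactor process
    kill 3201543
    wait 3201543                     # nvmf_tgt is a child job of the test shell
    # iso mode: rebind ioatdma/NVMe devices back to their kernel drivers
    scripts/setup.sh reset
    # nvmf_tcp_fini: drop the target namespace and flush the initiator-side address
    ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1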
00:41:21.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:41:21.828 19:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:28.419 19:45:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:28.419 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:28.420 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:28.420 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:28.420 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:28.420 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:28.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:28.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:41:28.420 00:41:28.420 --- 10.0.0.2 ping statistics --- 00:41:28.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.420 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:28.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
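The nvmf_tcp_init sequence traced above builds the point-to-point test network: the target-side interface is moved into its own network namespace and both ends get an address in 10.0.0.0/24. A consolidated sketch of those commands, using the cvl_0_0/cvl_0_1 names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # reachability check, both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings that follow in the log confirm both directions work before nvmf_tgt is started inside the namespace.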
00:41:28.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:41:28.420 00:41:28.420 --- 10.0.0.1 ping statistics --- 00:41:28.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.420 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:28.420 19:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:31.718 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:31.718 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:31.718 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:31.718 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:31.718 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:31.979 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:32.240 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3222751 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3222751 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3222751 ']' 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:32.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:32.501 19:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:32.501 [2024-07-22 19:45:51.289301] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:32.501 [2024-07-22 19:45:51.289427] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:32.501 EAL: No free 2048 kB hugepages reported on node 1 00:41:32.501 [2024-07-22 19:45:51.425026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:32.761 [2024-07-22 19:45:51.608988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:32.761 [2024-07-22 19:45:51.609032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:32.761 [2024-07-22 19:45:51.609045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:32.761 [2024-07-22 19:45:51.609055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:32.761 [2024-07-22 19:45:51.609064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:32.761 [2024-07-22 19:45:51.609285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:32.761 [2024-07-22 19:45:51.609445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.761 [2024-07-22 19:45:51.609298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:32.761 [2024-07-22 19:45:51.609471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:41:33.377 19:45:52 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:33.377 19:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:33.377 ************************************ 00:41:33.377 START TEST spdk_target_abort 00:41:33.377 ************************************ 00:41:33.377 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:41:33.377 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:33.377 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:41:33.377 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:33.377 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.637 spdk_targetn1 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.638 [2024-07-22 19:45:52.470881] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.638 [2024-07-22 19:45:52.511418] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:33.638 19:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:33.897 EAL: No free 2048 kB hugepages 
reported on node 1 00:41:33.897 [2024-07-22 19:45:52.693877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:488 len:8 PRP1 0x2000078bf000 PRP2 0x0 00:41:33.897 [2024-07-22 19:45:52.693913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003e p:1 m:0 dnr:0 00:41:33.897 [2024-07-22 19:45:52.701811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:720 len:8 PRP1 0x2000078bf000 PRP2 0x0 00:41:33.897 [2024-07-22 19:45:52.701834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005c p:1 m:0 dnr:0 00:41:33.897 [2024-07-22 19:45:52.725755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1456 len:8 PRP1 0x2000078c3000 PRP2 0x0 00:41:33.897 [2024-07-22 19:45:52.725778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b8 p:1 m:0 dnr:0 00:41:33.897 [2024-07-22 19:45:52.742413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2000 len:8 PRP1 0x2000078c3000 PRP2 0x0 00:41:33.897 [2024-07-22 19:45:52.742435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00fb p:1 m:0 dnr:0 00:41:33.897 [2024-07-22 19:45:52.745443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2152 len:8 PRP1 0x2000078c3000 PRP2 0x0 00:41:33.897 [2024-07-22 19:45:52.745463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:33.897 [2024-07-22 19:45:52.788745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3416 len:8 PRP1 0x2000078bf000 PRP2 0x0 00:41:33.897 [2024-07-22 19:45:52.788767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00ac p:0 m:0 dnr:0 00:41:33.897 [2024-07-22 19:45:52.790381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3512 len:8 PRP1 0x2000078c5000 PRP2 0x0 00:41:33.897 [2024-07-22 19:45:52.790401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00b9 p:0 m:0 dnr:0 00:41:37.203 Initializing NVMe Controllers 00:41:37.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:37.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:37.203 Initialization complete. Launching workers. 
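The spdk_target_abort setup above is issued through rpc_cmd, which forwards each call to the running nvmf_tgt (scripts/rpc.py under the hood in the test framework). A sketch of the same configuration expressed as direct rpc.py invocations, reusing the exact arguments from the trace (local NVMe at 0000:65:00.0, listener on 10.0.0.2:4420):

    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # exposes bdev spdk_targetn1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The abort workload is then driven by build/examples/abort with -q set to each value in qds=(4 24 64), which is what produces the three result summaries in this part of the log.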
00:41:37.203 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10248, failed: 7 00:41:37.203 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2848, failed to submit 7407 00:41:37.203 success 689, unsuccess 2159, failed 0 00:41:37.203 19:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:37.203 19:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:37.203 EAL: No free 2048 kB hugepages reported on node 1 00:41:37.203 [2024-07-22 19:45:55.923614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c4f000 PRP2 0x0 00:41:37.203 [2024-07-22 19:45:55.923660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:41:37.203 [2024-07-22 19:45:56.002125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2024 len:8 PRP1 0x200007c3d000 PRP2 0x0 00:41:37.203 [2024-07-22 19:45:56.002162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:41:37.203 [2024-07-22 19:45:56.029216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:2616 len:8 PRP1 0x200007c5b000 PRP2 0x0 00:41:37.203 [2024-07-22 19:45:56.029247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:37.203 [2024-07-22 19:45:56.069410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3600 len:8 PRP1 0x200007c55000 PRP2 0x0 00:41:37.203 [2024-07-22 19:45:56.069447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00c7 p:0 m:0 dnr:0 00:41:40.502 Initializing NVMe Controllers 00:41:40.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:40.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:40.502 Initialization complete. Launching workers. 
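The per-run summaries printed by the abort example can be cross-checked: aborts submitted plus aborts that failed to submit equals I/Os completed plus I/Os failed, and success plus unsuccess equals the number of aborts submitted. For the first (-q 4) spdk_target run above: 2848 + 7407 = 10255 = 10248 + 7, and 689 + 2159 = 2848. The same bookkeeping holds for the -q 24 and -q 64 runs that follow.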
00:41:40.502 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8557, failed: 4 00:41:40.502 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7325 00:41:40.502 success 318, unsuccess 918, failed 0 00:41:40.502 19:45:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:40.502 19:45:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:40.502 EAL: No free 2048 kB hugepages reported on node 1 00:41:40.502 [2024-07-22 19:45:59.381849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:720 len:8 PRP1 0x200007913000 PRP2 0x0 00:41:40.502 [2024-07-22 19:45:59.381887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:006c p:1 m:0 dnr:0 00:41:41.901 [2024-07-22 19:46:00.546770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:166 nsid:1 lba:118904 len:8 PRP1 0x2000078f7000 PRP2 0x0 00:41:41.901 [2024-07-22 19:46:00.546815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:166 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:41:43.812 Initializing NVMe Controllers 00:41:43.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:43.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:43.812 Initialization complete. Launching workers. 00:41:43.812 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38043, failed: 2 00:41:43.812 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2563, failed to submit 35482 00:41:43.812 success 589, unsuccess 1974, failed 0 00:41:43.812 19:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:43.812 19:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:43.812 19:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:43.812 19:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:43.812 19:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:43.812 19:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:43.812 19:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3222751 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3222751 ']' 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3222751 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:45.724 
19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3222751 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3222751' 00:41:45.724 killing process with pid 3222751 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3222751 00:41:45.724 19:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3222751 00:41:46.307 00:41:46.307 real 0m12.975s 00:41:46.307 user 0m51.201s 00:41:46.307 sys 0m2.086s 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:46.307 ************************************ 00:41:46.307 END TEST spdk_target_abort 00:41:46.307 ************************************ 00:41:46.307 19:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:41:46.307 19:46:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:46.307 19:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:46.307 19:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:46.307 19:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:46.307 ************************************ 00:41:46.307 START TEST kernel_target_abort 00:41:46.307 ************************************ 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:46.307 19:46:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:46.307 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:46.308 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:41:46.308 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:41:46.308 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:46.308 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:46.308 19:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:49.604 Waiting for block devices as requested 00:41:49.604 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:49.604 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:49.604 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:49.604 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:49.604 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:49.604 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:49.864 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:49.864 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:49.864 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:50.124 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:50.124 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:50.124 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:50.385 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:50.385 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:50.385 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:50.385 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:50.646 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:51.591 No valid GPT data, bailing 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
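Before building the kernel target, the trace above verifies that the NVMe namespace is safe to export: it must not be a zoned device, and it must not carry a partition table (spdk-gpt.py reports "No valid GPT data" and blkid returns an empty PTTYPE). A rough sketch of that check, assuming /dev/nvme0n1 as in this run:

    [[ $(cat /sys/block/nvme0n1/queue/zoned) == none ]] || exit 1   # skip zoned namespaces
    pt=$(blkid -s PTTYPE -o value /dev/nvme0n1)                     # empty output => no partition table
    [[ -z $pt ]] && nvme=/dev/nvme0n1                               # device is free for the kernel target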
00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:51.591 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:41:51.852 00:41:51.852 Discovery Log Number of Records 2, Generation counter 2 00:41:51.852 =====Discovery Log Entry 0====== 00:41:51.852 trtype: tcp 00:41:51.852 adrfam: ipv4 00:41:51.852 subtype: current discovery subsystem 00:41:51.852 treq: not specified, sq flow control disable supported 00:41:51.852 portid: 1 00:41:51.852 trsvcid: 4420 00:41:51.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:51.852 traddr: 10.0.0.1 00:41:51.852 eflags: none 00:41:51.852 sectype: none 00:41:51.852 =====Discovery Log Entry 1====== 00:41:51.852 trtype: tcp 00:41:51.852 adrfam: ipv4 00:41:51.852 subtype: nvme subsystem 00:41:51.852 treq: not specified, sq flow control disable supported 00:41:51.852 portid: 1 00:41:51.852 trsvcid: 4420 00:41:51.852 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:51.852 traddr: 10.0.0.1 00:41:51.852 eflags: none 00:41:51.852 sectype: none 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:51.852 
19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:51.852 19:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:51.852 EAL: No free 2048 kB hugepages reported on node 1 00:41:55.172 Initializing NVMe Controllers 00:41:55.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:55.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:55.172 Initialization complete. Launching workers. 
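The configure_kernel_target sequence traced above drives the Linux nvmet target through configfs: create a subsystem and a namespace, back the namespace with /dev/nvme0n1, open a TCP port on 10.0.0.1:4420 and link the subsystem to it. xtrace does not print redirection targets, so the attribute file names below are the standard nvmet configfs names and should be read as an assumed sketch rather than a verbatim replay of the script:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet                                      # nvmet_tcp is also needed (both are removed in the cleanup later)
    mkdir $subsys
    mkdir $subsys/namespaces/1
    mkdir $nvmet/ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # attribute names below are assumptions
    echo 1 > $subsys/attr_allow_any_host
    echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
    echo 1 > $subsys/namespaces/1/enable
    echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
    echo tcp > $nvmet/ports/1/addr_trtype
    echo 4420 > $nvmet/ports/1/addr_trsvcid
    echo ipv4 > $nvmet/ports/1/addr_adrfam
    ln -s $subsys $nvmet/ports/1/subsystems/
    nvme discover -t tcp -a 10.0.0.1 -s 4420            # the trace additionally passes --hostnqn/--hostid

The two-record discovery log printed in the trace (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn) confirms the port and subsystem link took effect before the kernel-target abort runs start.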
00:41:55.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50366, failed: 0 00:41:55.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50366, failed to submit 0 00:41:55.172 success 0, unsuccess 50366, failed 0 00:41:55.172 19:46:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:55.172 19:46:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:55.172 EAL: No free 2048 kB hugepages reported on node 1 00:41:58.471 Initializing NVMe Controllers 00:41:58.471 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:58.471 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:58.471 Initialization complete. Launching workers. 00:41:58.471 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88171, failed: 0 00:41:58.471 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22230, failed to submit 65941 00:41:58.471 success 0, unsuccess 22230, failed 0 00:41:58.471 19:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:58.471 19:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:58.471 EAL: No free 2048 kB hugepages reported on node 1 00:42:01.064 Initializing NVMe Controllers 00:42:01.064 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:01.064 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:01.064 Initialization complete. Launching workers. 
00:42:01.064 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84777, failed: 0 00:42:01.064 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21190, failed to submit 63587 00:42:01.064 success 0, unsuccess 21190, failed 0 00:42:01.064 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:01.064 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:01.064 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:42:01.324 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:01.324 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:01.324 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:01.324 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:01.324 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:42:01.324 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:42:01.324 19:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:04.627 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:04.627 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:04.888 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:04.888 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:04.888 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:04.888 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:06.801 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:06.801 00:42:06.801 real 0m20.514s 00:42:06.801 user 0m8.990s 00:42:06.801 sys 0m6.631s 00:42:06.801 19:46:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:06.801 19:46:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:06.801 ************************************ 00:42:06.801 END TEST kernel_target_abort 00:42:06.801 ************************************ 00:42:06.801 19:46:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:42:06.801 19:46:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:06.802 19:46:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:06.802 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:06.802 19:46:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@117 -- # sync 00:42:06.802 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:06.802 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:42:06.802 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:06.802 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:06.802 rmmod nvme_tcp 00:42:07.062 rmmod nvme_fabrics 00:42:07.062 rmmod nvme_keyring 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3222751 ']' 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3222751 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3222751 ']' 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3222751 00:42:07.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3222751) - No such process 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3222751 is not found' 00:42:07.062 Process with pid 3222751 is not found 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:42:07.062 19:46:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:10.364 Waiting for block devices as requested 00:42:10.364 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:10.364 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:10.624 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:10.624 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:10.884 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:10.884 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:10.885 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:10.885 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:11.146 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:11.146 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:11.406 19:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:11.406 19:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:11.406 19:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:11.406 19:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:11.406 19:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:11.406 19:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:11.406 19:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.952 19:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:13.952 00:42:13.952 real 0m51.985s 00:42:13.952 user 1m5.168s 00:42:13.952 sys 
0m18.840s 00:42:13.952 19:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:13.952 19:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:13.952 ************************************ 00:42:13.952 END TEST nvmf_abort_qd_sizes 00:42:13.952 ************************************ 00:42:13.952 19:46:32 -- common/autotest_common.sh@1142 -- # return 0 00:42:13.953 19:46:32 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:13.953 19:46:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:13.953 19:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:13.953 19:46:32 -- common/autotest_common.sh@10 -- # set +x 00:42:13.953 ************************************ 00:42:13.953 START TEST keyring_file 00:42:13.953 ************************************ 00:42:13.953 19:46:32 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:13.953 * Looking for test storage... 00:42:13.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:13.953 19:46:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:13.953 19:46:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:13.953 19:46:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:13.953 19:46:32 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.953 19:46:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.953 19:46:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.953 19:46:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:13.953 19:46:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:13.953 19:46:32 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4SJoI4pFeV 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4SJoI4pFeV 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4SJoI4pFeV 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.4SJoI4pFeV 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jwgXf162w0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:13.953 19:46:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jwgXf162w0 00:42:13.953 19:46:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jwgXf162w0 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jwgXf162w0 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=3233073 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3233073 00:42:13.953 19:46:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:13.953 19:46:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3233073 ']' 00:42:13.953 19:46:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:13.953 19:46:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:13.953 19:46:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:13.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
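[editor's note] The prep_key trace above boils down to roughly the following sketch; the temp file name, the NVMeTLSkey-1 interchange format, and the 0600 mode come straight from the trace, while the redirect into the temp file is assumed from the chmod/echo sequence:

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # /tmp/tmp.4SJoI4pFeV in this run
    format_interchange_psk "$key" 0 > "$path"   # emits an NVMeTLSkey-1 interchange-format PSK
    chmod 0600 "$path"                          # later in this log a 0660 file is rejected
    echo "$path"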
00:42:13.953 19:46:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:13.953 19:46:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:13.953 [2024-07-22 19:46:32.702467] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:13.953 [2024-07-22 19:46:32.702588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3233073 ] 00:42:13.953 EAL: No free 2048 kB hugepages reported on node 1 00:42:13.953 [2024-07-22 19:46:32.831481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.214 [2024-07-22 19:46:33.009795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.792 19:46:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:14.792 19:46:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:14.792 19:46:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:14.792 19:46:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.792 19:46:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:14.792 [2024-07-22 19:46:33.599246] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:14.792 null0 00:42:14.792 [2024-07-22 19:46:33.631282] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:14.792 [2024-07-22 19:46:33.631687] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:14.792 [2024-07-22 19:46:33.639299] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:42:14.792 19:46:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.792 19:46:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:14.793 [2024-07-22 19:46:33.655342] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:14.793 request: 00:42:14.793 { 00:42:14.793 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:14.793 "secure_channel": false, 00:42:14.793 "listen_address": { 00:42:14.793 "trtype": "tcp", 00:42:14.793 "traddr": "127.0.0.1", 00:42:14.793 "trsvcid": "4420" 00:42:14.793 }, 00:42:14.793 "method": "nvmf_subsystem_add_listener", 00:42:14.793 "req_id": 1 00:42:14.793 } 00:42:14.793 Got JSON-RPC error response 
00:42:14.793 response: 00:42:14.793 { 00:42:14.793 "code": -32602, 00:42:14.793 "message": "Invalid parameters" 00:42:14.793 } 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:14.793 19:46:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=3233404 00:42:14.793 19:46:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3233404 /var/tmp/bperf.sock 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3233404 ']' 00:42:14.793 19:46:33 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:14.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:14.793 19:46:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:14.793 [2024-07-22 19:46:33.739437] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
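[editor's note] The negative test traced above is equivalent to repeating the listener RPC directly against the target; since 127.0.0.1:4420 was already added for nqn.2016-06.io.spdk:cnode0 (with TLS noted as experimental), the duplicate call is expected to fail:

    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0
    # -> "Listener already exists", JSON-RPC error code -32602 (Invalid parameters)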
00:42:14.793 [2024-07-22 19:46:33.739568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3233404 ] 00:42:15.053 EAL: No free 2048 kB hugepages reported on node 1 00:42:15.053 [2024-07-22 19:46:33.865920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.313 [2024-07-22 19:46:34.040834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:15.574 19:46:34 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:15.574 19:46:34 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:15.574 19:46:34 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:15.574 19:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:15.835 19:46:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jwgXf162w0 00:42:15.835 19:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jwgXf162w0 00:42:16.096 19:46:34 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:42:16.096 19:46:34 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:42:16.096 19:46:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.096 19:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.096 19:46:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.096 19:46:34 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.4SJoI4pFeV == \/\t\m\p\/\t\m\p\.\4\S\J\o\I\4\p\F\e\V ]] 00:42:16.096 19:46:34 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:42:16.096 19:46:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:16.096 19:46:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.096 19:46:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:16.096 19:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.357 19:46:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jwgXf162w0 == \/\t\m\p\/\t\m\p\.\j\w\g\X\f\1\6\2\w\0 ]] 00:42:16.357 19:46:35 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.357 19:46:35 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:42:16.357 19:46:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.357 
19:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:16.357 19:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.618 19:46:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:16.618 19:46:35 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:16.618 19:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:16.618 [2024-07-22 19:46:35.554013] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:16.879 nvme0n1 00:42:16.879 19:46:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.879 19:46:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:42:16.879 19:46:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.879 19:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:17.140 19:46:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:42:17.140 19:46:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:17.140 Running I/O for 1 seconds... 
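[editor's note] The get_refcnt checks in this trace are just the bdevperf RPC socket plus jq, roughly as below (workspace prefix on rpc.py dropped for brevity):

    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq '.[] | select(.name == "key0")' | jq -r .refcnt
    # 1 while key0 is merely loaded; 2 once nvme0 is attached with --psk key0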
00:42:18.525 00:42:18.525 Latency(us) 00:42:18.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:18.525 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:18.525 nvme0n1 : 1.01 9225.24 36.04 0.00 0.00 13804.07 4505.60 20534.61 00:42:18.525 =================================================================================================================== 00:42:18.525 Total : 9225.24 36.04 0.00 0.00 13804.07 4505.60 20534.61 00:42:18.525 0 00:42:18.525 19:46:37 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:18.525 19:46:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.525 19:46:37 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:42:18.525 19:46:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.525 19:46:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:18.786 19:46:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:18.786 19:46:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:18.786 19:46:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:18.786 19:46:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:18.786 19:46:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:18.786 19:46:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:18.786 19:46:37 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:18.786 19:46:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:18.786 19:46:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:18.786 19:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:42:18.786 [2024-07-22 19:46:37.730600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:18.786 [2024-07-22 19:46:37.730930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (107): Transport endpoint is not connected 00:42:18.786 [2024-07-22 19:46:37.731917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (9): Bad file descriptor 00:42:18.786 [2024-07-22 19:46:37.732914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:18.786 [2024-07-22 19:46:37.732933] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:18.786 [2024-07-22 19:46:37.732942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:18.786 request: 00:42:18.786 { 00:42:18.786 "name": "nvme0", 00:42:18.786 "trtype": "tcp", 00:42:18.786 "traddr": "127.0.0.1", 00:42:18.786 "adrfam": "ipv4", 00:42:18.786 "trsvcid": "4420", 00:42:18.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:18.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:18.786 "prchk_reftag": false, 00:42:18.786 "prchk_guard": false, 00:42:18.786 "hdgst": false, 00:42:18.786 "ddgst": false, 00:42:18.786 "psk": "key1", 00:42:18.786 "method": "bdev_nvme_attach_controller", 00:42:18.786 "req_id": 1 00:42:18.786 } 00:42:18.786 Got JSON-RPC error response 00:42:18.786 response: 00:42:18.786 { 00:42:18.786 "code": -5, 00:42:18.786 "message": "Input/output error" 00:42:18.786 } 00:42:19.048 19:46:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:19.048 19:46:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:19.048 19:46:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:19.048 19:46:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:19.048 19:46:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.048 19:46:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:42:19.048 19:46:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.048 19:46:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:19.309 19:46:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:19.309 19:46:38 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:42:19.309 19:46:38 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:19.309 19:46:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:42:19.309 19:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:19.570 19:46:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:42:19.570 19:46:38 keyring_file -- keyring/file.sh@77 -- # jq length 00:42:19.570 19:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.570 19:46:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:42:19.570 19:46:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.4SJoI4pFeV 00:42:19.570 19:46:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:19.570 19:46:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:19.570 19:46:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:19.570 19:46:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:19.570 19:46:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:19.570 19:46:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:19.570 19:46:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:19.570 19:46:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:19.570 19:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:19.831 [2024-07-22 19:46:38.661215] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4SJoI4pFeV': 0100660 00:42:19.831 [2024-07-22 19:46:38.661245] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:19.831 request: 00:42:19.831 { 00:42:19.831 "name": "key0", 00:42:19.831 "path": "/tmp/tmp.4SJoI4pFeV", 00:42:19.831 "method": "keyring_file_add_key", 00:42:19.831 "req_id": 1 00:42:19.831 } 00:42:19.831 Got JSON-RPC error response 00:42:19.831 response: 00:42:19.831 { 00:42:19.831 "code": -1, 00:42:19.831 "message": "Operation not permitted" 00:42:19.831 } 00:42:19.831 19:46:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:19.831 19:46:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:19.831 19:46:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:19.831 19:46:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:19.831 19:46:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.4SJoI4pFeV 00:42:19.831 19:46:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:19.831 19:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4SJoI4pFeV 00:42:20.093 19:46:38 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.4SJoI4pFeV 00:42:20.093 19:46:38 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 
00:42:20.093 19:46:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:20.093 19:46:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.093 19:46:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.093 19:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.093 19:46:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:20.093 19:46:38 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:42:20.093 19:46:38 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.093 19:46:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:20.093 19:46:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.093 19:46:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:20.093 19:46:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:20.093 19:46:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:20.093 19:46:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:20.093 19:46:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.093 19:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.354 [2024-07-22 19:46:39.142477] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.4SJoI4pFeV': No such file or directory 00:42:20.355 [2024-07-22 19:46:39.142504] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:20.355 [2024-07-22 19:46:39.142528] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:20.355 [2024-07-22 19:46:39.142536] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:20.355 [2024-07-22 19:46:39.142544] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:20.355 request: 00:42:20.355 { 00:42:20.355 "name": "nvme0", 00:42:20.355 "trtype": "tcp", 00:42:20.355 "traddr": "127.0.0.1", 00:42:20.355 "adrfam": "ipv4", 00:42:20.355 "trsvcid": "4420", 00:42:20.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:20.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:20.355 "prchk_reftag": false, 00:42:20.355 "prchk_guard": false, 00:42:20.355 "hdgst": false, 00:42:20.355 "ddgst": false, 00:42:20.355 "psk": "key0", 00:42:20.355 "method": "bdev_nvme_attach_controller", 00:42:20.355 "req_id": 1 00:42:20.355 } 00:42:20.355 Got JSON-RPC error response 00:42:20.355 response: 00:42:20.355 { 00:42:20.355 "code": -19, 00:42:20.355 "message": "No such device" 00:42:20.355 } 00:42:20.355 19:46:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:20.355 
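[editor's note] The two failure modes exercised in this part of the trace, condensed into shell form (the key file path is the one created earlier in this log):

    chmod 0660 /tmp/tmp.4SJoI4pFeV   # keyring_file_add_key then fails: invalid permissions (0100660), code -1
    chmod 0600 /tmp/tmp.4SJoI4pFeV   # 0600 is accepted again
    rm -f /tmp/tmp.4SJoI4pFeV        # a later attach with --psk key0 fails: could not stat key file, code -19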
19:46:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:20.355 19:46:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:20.355 19:46:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:20.355 19:46:39 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:42:20.355 19:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:20.615 19:46:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IW6ve6Dc0u 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:20.615 19:46:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:20.615 19:46:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:20.615 19:46:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:20.615 19:46:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:20.615 19:46:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:20.615 19:46:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IW6ve6Dc0u 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IW6ve6Dc0u 00:42:20.615 19:46:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IW6ve6Dc0u 00:42:20.615 19:46:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IW6ve6Dc0u 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IW6ve6Dc0u 00:42:20.615 19:46:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.615 19:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:20.884 nvme0n1 00:42:20.884 19:46:39 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:42:20.884 19:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:20.884 19:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.884 19:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.884 19:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.884 19:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:21.191 19:46:39 keyring_file -- 
keyring/file.sh@99 -- # (( 2 == 2 )) 00:42:21.191 19:46:39 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:42:21.191 19:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:21.191 19:46:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:42:21.191 19:46:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:42:21.191 19:46:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.191 19:46:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:21.191 19:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.452 19:46:40 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:42:21.452 19:46:40 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:42:21.452 19:46:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:21.452 19:46:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:21.452 19:46:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.452 19:46:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:21.452 19:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.713 19:46:40 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:42:21.713 19:46:40 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:21.713 19:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:21.713 19:46:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:42:21.713 19:46:40 keyring_file -- keyring/file.sh@104 -- # jq length 00:42:21.713 19:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.974 19:46:40 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:42:21.974 19:46:40 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IW6ve6Dc0u 00:42:21.974 19:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IW6ve6Dc0u 00:42:21.974 19:46:40 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jwgXf162w0 00:42:21.974 19:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jwgXf162w0 00:42:22.234 19:46:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:22.234 19:46:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:22.494 nvme0n1 00:42:22.494 19:46:41 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:42:22.494 19:46:41 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:22.756 19:46:41 keyring_file -- keyring/file.sh@112 -- # config='{ 00:42:22.756 "subsystems": [ 00:42:22.756 { 00:42:22.756 "subsystem": "keyring", 00:42:22.756 "config": [ 00:42:22.756 { 00:42:22.756 "method": "keyring_file_add_key", 00:42:22.756 "params": { 00:42:22.756 "name": "key0", 00:42:22.756 "path": "/tmp/tmp.IW6ve6Dc0u" 00:42:22.756 } 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "method": "keyring_file_add_key", 00:42:22.756 "params": { 00:42:22.756 "name": "key1", 00:42:22.756 "path": "/tmp/tmp.jwgXf162w0" 00:42:22.756 } 00:42:22.756 } 00:42:22.756 ] 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "subsystem": "iobuf", 00:42:22.756 "config": [ 00:42:22.756 { 00:42:22.756 "method": "iobuf_set_options", 00:42:22.756 "params": { 00:42:22.756 "small_pool_count": 8192, 00:42:22.756 "large_pool_count": 1024, 00:42:22.756 "small_bufsize": 8192, 00:42:22.756 "large_bufsize": 135168 00:42:22.756 } 00:42:22.756 } 00:42:22.756 ] 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "subsystem": "sock", 00:42:22.756 "config": [ 00:42:22.756 { 00:42:22.756 "method": "sock_set_default_impl", 00:42:22.756 "params": { 00:42:22.756 "impl_name": "posix" 00:42:22.756 } 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "method": "sock_impl_set_options", 00:42:22.756 "params": { 00:42:22.756 "impl_name": "ssl", 00:42:22.756 "recv_buf_size": 4096, 00:42:22.756 "send_buf_size": 4096, 00:42:22.756 "enable_recv_pipe": true, 00:42:22.756 "enable_quickack": false, 00:42:22.756 "enable_placement_id": 0, 00:42:22.756 "enable_zerocopy_send_server": true, 00:42:22.756 "enable_zerocopy_send_client": false, 00:42:22.756 "zerocopy_threshold": 0, 00:42:22.756 "tls_version": 0, 00:42:22.756 "enable_ktls": false 00:42:22.756 } 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "method": "sock_impl_set_options", 00:42:22.756 "params": { 00:42:22.756 "impl_name": "posix", 00:42:22.756 "recv_buf_size": 2097152, 00:42:22.756 "send_buf_size": 2097152, 00:42:22.756 "enable_recv_pipe": true, 00:42:22.756 "enable_quickack": false, 00:42:22.756 "enable_placement_id": 0, 00:42:22.756 "enable_zerocopy_send_server": true, 00:42:22.756 "enable_zerocopy_send_client": false, 00:42:22.756 "zerocopy_threshold": 0, 00:42:22.756 "tls_version": 0, 00:42:22.756 "enable_ktls": false 00:42:22.756 } 00:42:22.756 } 00:42:22.756 ] 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "subsystem": "vmd", 00:42:22.756 "config": [] 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "subsystem": "accel", 00:42:22.756 "config": [ 00:42:22.756 { 00:42:22.756 "method": "accel_set_options", 00:42:22.756 "params": { 00:42:22.756 "small_cache_size": 128, 00:42:22.756 "large_cache_size": 16, 00:42:22.756 "task_count": 2048, 00:42:22.756 "sequence_count": 2048, 00:42:22.756 "buf_count": 2048 00:42:22.756 } 00:42:22.756 } 00:42:22.756 ] 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "subsystem": "bdev", 00:42:22.756 "config": [ 00:42:22.756 { 00:42:22.756 "method": "bdev_set_options", 00:42:22.756 "params": { 00:42:22.756 "bdev_io_pool_size": 65535, 00:42:22.756 "bdev_io_cache_size": 256, 00:42:22.756 "bdev_auto_examine": true, 00:42:22.756 "iobuf_small_cache_size": 128, 00:42:22.756 "iobuf_large_cache_size": 16 00:42:22.756 } 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "method": "bdev_raid_set_options", 00:42:22.756 "params": { 00:42:22.756 "process_window_size_kb": 1024, 00:42:22.756 "process_max_bandwidth_mb_sec": 0 00:42:22.756 } 00:42:22.756 }, 
00:42:22.756 { 00:42:22.756 "method": "bdev_iscsi_set_options", 00:42:22.756 "params": { 00:42:22.756 "timeout_sec": 30 00:42:22.756 } 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "method": "bdev_nvme_set_options", 00:42:22.756 "params": { 00:42:22.756 "action_on_timeout": "none", 00:42:22.756 "timeout_us": 0, 00:42:22.756 "timeout_admin_us": 0, 00:42:22.756 "keep_alive_timeout_ms": 10000, 00:42:22.756 "arbitration_burst": 0, 00:42:22.756 "low_priority_weight": 0, 00:42:22.756 "medium_priority_weight": 0, 00:42:22.756 "high_priority_weight": 0, 00:42:22.756 "nvme_adminq_poll_period_us": 10000, 00:42:22.756 "nvme_ioq_poll_period_us": 0, 00:42:22.756 "io_queue_requests": 512, 00:42:22.756 "delay_cmd_submit": true, 00:42:22.756 "transport_retry_count": 4, 00:42:22.756 "bdev_retry_count": 3, 00:42:22.756 "transport_ack_timeout": 0, 00:42:22.756 "ctrlr_loss_timeout_sec": 0, 00:42:22.756 "reconnect_delay_sec": 0, 00:42:22.756 "fast_io_fail_timeout_sec": 0, 00:42:22.756 "disable_auto_failback": false, 00:42:22.756 "generate_uuids": false, 00:42:22.756 "transport_tos": 0, 00:42:22.756 "nvme_error_stat": false, 00:42:22.756 "rdma_srq_size": 0, 00:42:22.756 "io_path_stat": false, 00:42:22.756 "allow_accel_sequence": false, 00:42:22.756 "rdma_max_cq_size": 0, 00:42:22.756 "rdma_cm_event_timeout_ms": 0, 00:42:22.756 "dhchap_digests": [ 00:42:22.756 "sha256", 00:42:22.756 "sha384", 00:42:22.756 "sha512" 00:42:22.756 ], 00:42:22.756 "dhchap_dhgroups": [ 00:42:22.756 "null", 00:42:22.756 "ffdhe2048", 00:42:22.756 "ffdhe3072", 00:42:22.756 "ffdhe4096", 00:42:22.756 "ffdhe6144", 00:42:22.756 "ffdhe8192" 00:42:22.756 ] 00:42:22.756 } 00:42:22.756 }, 00:42:22.756 { 00:42:22.756 "method": "bdev_nvme_attach_controller", 00:42:22.756 "params": { 00:42:22.756 "name": "nvme0", 00:42:22.756 "trtype": "TCP", 00:42:22.756 "adrfam": "IPv4", 00:42:22.756 "traddr": "127.0.0.1", 00:42:22.756 "trsvcid": "4420", 00:42:22.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:22.756 "prchk_reftag": false, 00:42:22.756 "prchk_guard": false, 00:42:22.756 "ctrlr_loss_timeout_sec": 0, 00:42:22.756 "reconnect_delay_sec": 0, 00:42:22.756 "fast_io_fail_timeout_sec": 0, 00:42:22.756 "psk": "key0", 00:42:22.756 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:22.756 "hdgst": false, 00:42:22.756 "ddgst": false 00:42:22.756 } 00:42:22.756 }, 00:42:22.757 { 00:42:22.757 "method": "bdev_nvme_set_hotplug", 00:42:22.757 "params": { 00:42:22.757 "period_us": 100000, 00:42:22.757 "enable": false 00:42:22.757 } 00:42:22.757 }, 00:42:22.757 { 00:42:22.757 "method": "bdev_wait_for_examine" 00:42:22.757 } 00:42:22.757 ] 00:42:22.757 }, 00:42:22.757 { 00:42:22.757 "subsystem": "nbd", 00:42:22.757 "config": [] 00:42:22.757 } 00:42:22.757 ] 00:42:22.757 }' 00:42:22.757 19:46:41 keyring_file -- keyring/file.sh@114 -- # killprocess 3233404 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3233404 ']' 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3233404 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3233404 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 3233404' 00:42:22.757 killing process with pid 3233404 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@967 -- # kill 3233404 00:42:22.757 Received shutdown signal, test time was about 1.000000 seconds 00:42:22.757 00:42:22.757 Latency(us) 00:42:22.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:22.757 =================================================================================================================== 00:42:22.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:22.757 19:46:41 keyring_file -- common/autotest_common.sh@972 -- # wait 3233404 00:42:23.093 19:46:42 keyring_file -- keyring/file.sh@117 -- # bperfpid=3234943 00:42:23.093 19:46:42 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3234943 /var/tmp/bperf.sock 00:42:23.093 19:46:42 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3234943 ']' 00:42:23.093 19:46:42 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:23.355 19:46:42 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:23.355 19:46:42 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:23.355 19:46:42 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:23.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:23.355 19:46:42 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:23.355 19:46:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:23.355 19:46:42 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:42:23.355 "subsystems": [ 00:42:23.355 { 00:42:23.355 "subsystem": "keyring", 00:42:23.355 "config": [ 00:42:23.355 { 00:42:23.355 "method": "keyring_file_add_key", 00:42:23.355 "params": { 00:42:23.355 "name": "key0", 00:42:23.355 "path": "/tmp/tmp.IW6ve6Dc0u" 00:42:23.355 } 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "method": "keyring_file_add_key", 00:42:23.355 "params": { 00:42:23.355 "name": "key1", 00:42:23.355 "path": "/tmp/tmp.jwgXf162w0" 00:42:23.355 } 00:42:23.355 } 00:42:23.355 ] 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "subsystem": "iobuf", 00:42:23.355 "config": [ 00:42:23.355 { 00:42:23.355 "method": "iobuf_set_options", 00:42:23.355 "params": { 00:42:23.355 "small_pool_count": 8192, 00:42:23.355 "large_pool_count": 1024, 00:42:23.355 "small_bufsize": 8192, 00:42:23.355 "large_bufsize": 135168 00:42:23.355 } 00:42:23.355 } 00:42:23.355 ] 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "subsystem": "sock", 00:42:23.355 "config": [ 00:42:23.355 { 00:42:23.355 "method": "sock_set_default_impl", 00:42:23.355 "params": { 00:42:23.355 "impl_name": "posix" 00:42:23.355 } 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "method": "sock_impl_set_options", 00:42:23.355 "params": { 00:42:23.355 "impl_name": "ssl", 00:42:23.355 "recv_buf_size": 4096, 00:42:23.355 "send_buf_size": 4096, 00:42:23.355 "enable_recv_pipe": true, 00:42:23.355 "enable_quickack": false, 00:42:23.355 "enable_placement_id": 0, 00:42:23.355 "enable_zerocopy_send_server": true, 00:42:23.355 "enable_zerocopy_send_client": false, 00:42:23.355 "zerocopy_threshold": 0, 00:42:23.355 "tls_version": 0, 00:42:23.355 "enable_ktls": false 00:42:23.355 } 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "method": 
"sock_impl_set_options", 00:42:23.355 "params": { 00:42:23.355 "impl_name": "posix", 00:42:23.355 "recv_buf_size": 2097152, 00:42:23.355 "send_buf_size": 2097152, 00:42:23.355 "enable_recv_pipe": true, 00:42:23.355 "enable_quickack": false, 00:42:23.355 "enable_placement_id": 0, 00:42:23.355 "enable_zerocopy_send_server": true, 00:42:23.355 "enable_zerocopy_send_client": false, 00:42:23.355 "zerocopy_threshold": 0, 00:42:23.355 "tls_version": 0, 00:42:23.355 "enable_ktls": false 00:42:23.355 } 00:42:23.355 } 00:42:23.355 ] 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "subsystem": "vmd", 00:42:23.355 "config": [] 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "subsystem": "accel", 00:42:23.355 "config": [ 00:42:23.355 { 00:42:23.355 "method": "accel_set_options", 00:42:23.355 "params": { 00:42:23.355 "small_cache_size": 128, 00:42:23.355 "large_cache_size": 16, 00:42:23.355 "task_count": 2048, 00:42:23.355 "sequence_count": 2048, 00:42:23.355 "buf_count": 2048 00:42:23.355 } 00:42:23.355 } 00:42:23.355 ] 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "subsystem": "bdev", 00:42:23.355 "config": [ 00:42:23.355 { 00:42:23.355 "method": "bdev_set_options", 00:42:23.355 "params": { 00:42:23.355 "bdev_io_pool_size": 65535, 00:42:23.355 "bdev_io_cache_size": 256, 00:42:23.355 "bdev_auto_examine": true, 00:42:23.355 "iobuf_small_cache_size": 128, 00:42:23.355 "iobuf_large_cache_size": 16 00:42:23.355 } 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "method": "bdev_raid_set_options", 00:42:23.355 "params": { 00:42:23.355 "process_window_size_kb": 1024, 00:42:23.355 "process_max_bandwidth_mb_sec": 0 00:42:23.355 } 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "method": "bdev_iscsi_set_options", 00:42:23.355 "params": { 00:42:23.355 "timeout_sec": 30 00:42:23.355 } 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "method": "bdev_nvme_set_options", 00:42:23.355 "params": { 00:42:23.355 "action_on_timeout": "none", 00:42:23.355 "timeout_us": 0, 00:42:23.355 "timeout_admin_us": 0, 00:42:23.355 "keep_alive_timeout_ms": 10000, 00:42:23.355 "arbitration_burst": 0, 00:42:23.355 "low_priority_weight": 0, 00:42:23.355 "medium_priority_weight": 0, 00:42:23.355 "high_priority_weight": 0, 00:42:23.355 "nvme_adminq_poll_period_us": 10000, 00:42:23.355 "nvme_ioq_poll_period_us": 0, 00:42:23.355 "io_queue_requests": 512, 00:42:23.355 "delay_cmd_submit": true, 00:42:23.355 "transport_retry_count": 4, 00:42:23.355 "bdev_retry_count": 3, 00:42:23.355 "transport_ack_timeout": 0, 00:42:23.355 "ctrlr_loss_timeout_sec": 0, 00:42:23.355 "reconnect_delay_sec": 0, 00:42:23.355 "fast_io_fail_timeout_sec": 0, 00:42:23.355 "disable_auto_failback": false, 00:42:23.355 "generate_uuids": false, 00:42:23.355 "transport_tos": 0, 00:42:23.355 "nvme_error_stat": false, 00:42:23.355 "rdma_srq_size": 0, 00:42:23.355 "io_path_stat": false, 00:42:23.355 "allow_accel_sequence": false, 00:42:23.355 "rdma_max_cq_size": 0, 00:42:23.355 "rdma_cm_event_timeout_ms": 0, 00:42:23.355 "dhchap_digests": [ 00:42:23.355 "sha256", 00:42:23.355 "sha384", 00:42:23.355 "sha512" 00:42:23.355 ], 00:42:23.355 "dhchap_dhgroups": [ 00:42:23.355 "null", 00:42:23.355 "ffdhe2048", 00:42:23.355 "ffdhe3072", 00:42:23.355 "ffdhe4096", 00:42:23.355 "ffdhe6144", 00:42:23.355 "ffdhe8192" 00:42:23.355 ] 00:42:23.355 } 00:42:23.355 }, 00:42:23.355 { 00:42:23.355 "method": "bdev_nvme_attach_controller", 00:42:23.355 "params": { 00:42:23.355 "name": "nvme0", 00:42:23.355 "trtype": "TCP", 00:42:23.355 "adrfam": "IPv4", 00:42:23.355 "traddr": "127.0.0.1", 00:42:23.355 "trsvcid": "4420", 
00:42:23.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:23.355 "prchk_reftag": false, 00:42:23.355 "prchk_guard": false, 00:42:23.355 "ctrlr_loss_timeout_sec": 0, 00:42:23.356 "reconnect_delay_sec": 0, 00:42:23.356 "fast_io_fail_timeout_sec": 0, 00:42:23.356 "psk": "key0", 00:42:23.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:23.356 "hdgst": false, 00:42:23.356 "ddgst": false 00:42:23.356 } 00:42:23.356 }, 00:42:23.356 { 00:42:23.356 "method": "bdev_nvme_set_hotplug", 00:42:23.356 "params": { 00:42:23.356 "period_us": 100000, 00:42:23.356 "enable": false 00:42:23.356 } 00:42:23.356 }, 00:42:23.356 { 00:42:23.356 "method": "bdev_wait_for_examine" 00:42:23.356 } 00:42:23.356 ] 00:42:23.356 }, 00:42:23.356 { 00:42:23.356 "subsystem": "nbd", 00:42:23.356 "config": [] 00:42:23.356 } 00:42:23.356 ] 00:42:23.356 }' 00:42:23.356 [2024-07-22 19:46:42.123537] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:23.356 [2024-07-22 19:46:42.123650] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234943 ] 00:42:23.356 EAL: No free 2048 kB hugepages reported on node 1 00:42:23.356 [2024-07-22 19:46:42.244721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.617 [2024-07-22 19:46:42.380327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:23.877 [2024-07-22 19:46:42.636964] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:24.138 19:46:42 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:24.138 19:46:42 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:24.138 19:46:42 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:42:24.138 19:46:42 keyring_file -- keyring/file.sh@120 -- # jq length 00:42:24.138 19:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.138 19:46:43 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:42:24.138 19:46:43 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:42:24.138 19:46:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:24.138 19:46:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.138 19:46:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.138 19:46:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.138 19:46:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.399 19:46:43 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:24.399 19:46:43 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:42:24.399 19:46:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:24.399 19:46:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.399 19:46:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.399 19:46:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:24.399 19:46:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.399 19:46:43 keyring_file -- keyring/file.sh@122 
-- # (( 1 == 1 )) 00:42:24.399 19:46:43 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:42:24.399 19:46:43 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:42:24.399 19:46:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:24.660 19:46:43 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:42:24.660 19:46:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:24.660 19:46:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IW6ve6Dc0u /tmp/tmp.jwgXf162w0 00:42:24.660 19:46:43 keyring_file -- keyring/file.sh@20 -- # killprocess 3234943 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3234943 ']' 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3234943 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3234943 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3234943' 00:42:24.660 killing process with pid 3234943 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@967 -- # kill 3234943 00:42:24.660 Received shutdown signal, test time was about 1.000000 seconds 00:42:24.660 00:42:24.660 Latency(us) 00:42:24.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:24.660 =================================================================================================================== 00:42:24.660 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:24.660 19:46:43 keyring_file -- common/autotest_common.sh@972 -- # wait 3234943 00:42:25.232 19:46:44 keyring_file -- keyring/file.sh@21 -- # killprocess 3233073 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3233073 ']' 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3233073 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3233073 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3233073' 00:42:25.232 killing process with pid 3233073 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@967 -- # kill 3233073 00:42:25.232 [2024-07-22 19:46:44.120019] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:42:25.232 19:46:44 keyring_file -- common/autotest_common.sh@972 -- # wait 3233073 00:42:27.146 00:42:27.146 real 0m13.400s 00:42:27.146 user 0m28.718s 00:42:27.146 sys 0m2.981s 00:42:27.146 19:46:45 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:27.146 19:46:45 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:27.146 ************************************ 00:42:27.146 END TEST keyring_file 00:42:27.146 ************************************ 00:42:27.146 19:46:45 -- common/autotest_common.sh@1142 -- # return 0 00:42:27.146 19:46:45 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:42:27.146 19:46:45 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:27.146 19:46:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:27.146 19:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:27.146 19:46:45 -- common/autotest_common.sh@10 -- # set +x 00:42:27.146 ************************************ 00:42:27.146 START TEST keyring_linux 00:42:27.146 ************************************ 00:42:27.146 19:46:45 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:27.146 * Looking for test storage... 00:42:27.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:27.146 19:46:45 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:27.146 19:46:45 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:27.146 19:46:45 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:27.146 19:46:45 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.146 19:46:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.146 19:46:45 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.146 19:46:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:27.146 19:46:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:27.146 /tmp/:spdk-test:key0 00:42:27.146 19:46:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:27.146 19:46:45 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:27.146 19:46:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:27.146 19:46:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:27.146 /tmp/:spdk-test:key1 00:42:27.146 19:46:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3235843 00:42:27.146 19:46:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3235843 00:42:27.146 19:46:46 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3235843 ']' 00:42:27.146 19:46:46 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:27.146 19:46:46 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:27.146 19:46:46 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:27.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
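Note on the step just above: format_interchange_psk pipes the raw key string through an inline "python -" snippet and the result is what gets written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. As a rough standalone sketch of that encoding, assuming (as the digest value 0 and the "00" field in the printed key suggest) that the key bytes are used as-is and only a little-endian CRC-32 trailer is appended before base64 encoding; the real python body lives in nvmf/common.sh and may differ in detail, and the output path below is illustrative, not the one the test uses:

# Sketch only: approximate the NVMeTLSkey-1 interchange string produced above for key0.
psk=00112233445566778899aabbccddeeff
python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                      # the test treats the hex string as raw ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed 4-byte little-endian CRC-32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
' "$psk" > /tmp/example-interchange-key0        # illustrative path; the test writes /tmp/:spdk-test:key0
chmod 0600 /tmp/example-interchange-key0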
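For comparison with the kernel-keyring flow being prepared here, the keyring_file run that finished further above drove everything from a JSON config piped into bdevperf via -c /dev/fd/63. Most of that dump is defaults; the parts that matter for the test are the two keyring_file_add_key entries and the bdev_nvme_attach_controller call that references key0. A trimmed-down sketch of an equivalent invocation follows; the key paths are the per-run temp files from the dump, the config file name is illustrative, and every omitted subsystem (iobuf, sock, accel, ...) falls back to defaults:

# Sketch only: pared-down config equivalent to the save_config dump shown earlier.
cat > /tmp/keyring_file_example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.IW6ve6Dc0u" } },
        { "method": "keyring_file_add_key", "params": { "name": "key1", "path": "/tmp/tmp.jwgXf162w0" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4", "traddr": "127.0.0.1",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0", "psk": "key0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /tmp/keyring_file_example.json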
00:42:27.146 19:46:46 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:27.146 19:46:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:27.146 19:46:46 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:27.407 [2024-07-22 19:46:46.099970] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:27.407 [2024-07-22 19:46:46.100088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235843 ] 00:42:27.407 EAL: No free 2048 kB hugepages reported on node 1 00:42:27.407 [2024-07-22 19:46:46.209780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.669 [2024-07-22 19:46:46.384145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.239 19:46:46 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:28.239 19:46:46 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:28.239 19:46:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:28.239 19:46:46 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.239 19:46:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:28.239 [2024-07-22 19:46:46.956423] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:28.239 null0 00:42:28.239 [2024-07-22 19:46:46.988457] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:28.239 [2024-07-22 19:46:46.988859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:28.239 19:46:47 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.239 19:46:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:28.239 753469102 00:42:28.240 19:46:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:28.240 354017380 00:42:28.240 19:46:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3235998 00:42:28.240 19:46:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3235998 /var/tmp/bperf.sock 00:42:28.240 19:46:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3235998 ']' 00:42:28.240 19:46:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:28.240 19:46:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:28.240 19:46:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:28.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:28.240 19:46:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:28.240 19:46:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:28.240 19:46:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:28.240 [2024-07-22 19:46:47.086820] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
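The keyring_linux variant stores the same interchange strings in the kernel session keyring instead of in files. The keyctl calls the test drives through its helpers (the adds just above, search/print further on, unlink during cleanup) boil down to the sequence below. The serial numbers are per-run values; reading the key material back from the files prep_key wrote is shown here as a convenient equivalent to pasting the literal string the way linux.sh does:

# Load both interchange-format keys into the session keyring (@s), as the test does above.
keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s   # prints a serial, e.g. 753469102
keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s   # prints a serial, e.g. 354017380

# Later checks resolve a key name back to its serial and dump its payload.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"        # prints the NVMeTLSkey-1:00:...: string

# Cleanup unlinks both keys again.
keyctl unlink "$(keyctl search @s user :spdk-test:key0)"
keyctl unlink "$(keyctl search @s user :spdk-test:key1)"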
00:42:28.240 [2024-07-22 19:46:47.086928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235998 ] 00:42:28.240 EAL: No free 2048 kB hugepages reported on node 1 00:42:28.501 [2024-07-22 19:46:47.204283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.501 [2024-07-22 19:46:47.338791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:29.072 19:46:47 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:29.072 19:46:47 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:29.072 19:46:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:29.072 19:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:29.072 19:46:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:29.072 19:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:29.333 19:46:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:29.333 19:46:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:29.594 [2024-07-22 19:46:48.419421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:29.594 nvme0n1 00:42:29.594 19:46:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:29.594 19:46:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:29.594 19:46:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:29.594 19:46:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:29.594 19:46:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:29.594 19:46:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.855 19:46:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:29.855 19:46:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:29.855 19:46:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:29.855 19:46:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:29.855 19:46:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:29.855 19:46:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:29.855 19:46:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:30.116 19:46:48 keyring_linux -- keyring/linux.sh@25 -- # sn=753469102 00:42:30.116 19:46:48 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:30.116 19:46:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:42:30.117 19:46:48 keyring_linux -- keyring/linux.sh@26 -- # [[ 753469102 == \7\5\3\4\6\9\1\0\2 ]] 00:42:30.117 19:46:48 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 753469102 00:42:30.117 19:46:48 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:30.117 19:46:48 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:30.117 Running I/O for 1 seconds... 00:42:31.060 00:42:31.060 Latency(us) 00:42:31.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:31.060 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:31.060 nvme0n1 : 1.01 8706.79 34.01 0.00 0.00 14588.93 9011.20 20206.93 00:42:31.060 =================================================================================================================== 00:42:31.060 Total : 8706.79 34.01 0.00 0.00 14588.93 9011.20 20206.93 00:42:31.060 0 00:42:31.060 19:46:49 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:31.060 19:46:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:31.320 19:46:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:31.320 19:46:50 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:31.320 19:46:50 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:42:31.320 19:46:50 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:31.320 19:46:50 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:31.320 19:46:50 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:31.320 19:46:50 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:31.320 19:46:50 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:31.320 19:46:50 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:31.320 19:46:50 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:31.581 [2024-07-22 19:46:50.412588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:31.581 [2024-07-22 19:46:50.412640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d180 (107): Transport endpoint is not connected 00:42:31.581 [2024-07-22 19:46:50.413625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d180 (9): Bad file descriptor 00:42:31.581 [2024-07-22 19:46:50.414623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:31.581 [2024-07-22 19:46:50.414636] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:31.581 [2024-07-22 19:46:50.414646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:31.581 request: 00:42:31.581 { 00:42:31.581 "name": "nvme0", 00:42:31.581 "trtype": "tcp", 00:42:31.581 "traddr": "127.0.0.1", 00:42:31.581 "adrfam": "ipv4", 00:42:31.581 "trsvcid": "4420", 00:42:31.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:31.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:31.581 "prchk_reftag": false, 00:42:31.581 "prchk_guard": false, 00:42:31.581 "hdgst": false, 00:42:31.581 "ddgst": false, 00:42:31.581 "psk": ":spdk-test:key1", 00:42:31.581 "method": "bdev_nvme_attach_controller", 00:42:31.581 "req_id": 1 00:42:31.581 } 00:42:31.581 Got JSON-RPC error response 00:42:31.581 response: 00:42:31.581 { 00:42:31.581 "code": -5, 00:42:31.581 "message": "Input/output error" 00:42:31.581 } 00:42:31.581 19:46:50 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:42:31.581 19:46:50 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:31.581 19:46:50 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:31.581 19:46:50 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:31.581 19:46:50 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:31.581 19:46:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:31.581 19:46:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@33 -- # sn=753469102 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 753469102 00:42:31.582 1 links removed 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@33 -- # sn=354017380 
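The failure above is the intended negative case: the earlier attach with :spdk-test:key0 succeeded, while attaching with :spdk-test:key1 (a key the target side was not set up with) returns the -5 Input/output error seen in the JSON-RPC response. Reproducing both by hand against the bdevperf RPC socket would look roughly like this, using the same arguments the test's bperf_cmd wrapper passes above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Expected to succeed: :spdk-test:key0 matches the PSK the target was configured with.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$rpc -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0

# Expected to fail with "Input/output error" (code -5): :spdk-test:key1 is the wrong key.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1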
00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 354017380 00:42:31.582 1 links removed 00:42:31.582 19:46:50 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3235998 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3235998 ']' 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3235998 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3235998 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3235998' 00:42:31.582 killing process with pid 3235998 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@967 -- # kill 3235998 00:42:31.582 Received shutdown signal, test time was about 1.000000 seconds 00:42:31.582 00:42:31.582 Latency(us) 00:42:31.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:31.582 =================================================================================================================== 00:42:31.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:31.582 19:46:50 keyring_linux -- common/autotest_common.sh@972 -- # wait 3235998 00:42:32.154 19:46:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3235843 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3235843 ']' 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3235843 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3235843 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3235843' 00:42:32.154 killing process with pid 3235843 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@967 -- # kill 3235843 00:42:32.154 19:46:51 keyring_linux -- common/autotest_common.sh@972 -- # wait 3235843 00:42:34.069 00:42:34.069 real 0m6.873s 00:42:34.069 user 0m10.761s 00:42:34.069 sys 0m1.599s 00:42:34.069 19:46:52 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:34.069 19:46:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:34.069 ************************************ 00:42:34.069 END TEST keyring_linux 00:42:34.069 ************************************ 00:42:34.069 19:46:52 -- common/autotest_common.sh@1142 -- # return 0 00:42:34.069 19:46:52 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 
-- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:42:34.069 19:46:52 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:42:34.069 19:46:52 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:42:34.069 19:46:52 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:42:34.069 19:46:52 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:42:34.069 19:46:52 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:42:34.069 19:46:52 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:42:34.069 19:46:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:34.069 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:42:34.069 19:46:52 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:42:34.069 19:46:52 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:34.069 19:46:52 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:34.069 19:46:52 -- common/autotest_common.sh@10 -- # set +x 00:42:40.656 INFO: APP EXITING 00:42:40.656 INFO: killing all VMs 00:42:40.656 INFO: killing vhost app 00:42:40.656 WARN: no vhost pid file found 00:42:40.656 INFO: EXIT DONE 00:42:43.965 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:65:00.0 (144d a80a): Already using the nvme driver 00:42:43.965 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:42:43.965 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:42:47.283 Cleaning 00:42:47.283 Removing: /var/run/dpdk/spdk0/config 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:47.283 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:47.283 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:47.283 Removing: /var/run/dpdk/spdk1/config 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:47.283 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:47.283 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:47.283 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:47.283 Removing: /var/run/dpdk/spdk1/mp_socket 00:42:47.283 Removing: /var/run/dpdk/spdk2/config 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:47.283 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:47.283 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:47.283 Removing: /var/run/dpdk/spdk3/config 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:47.283 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:47.283 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:47.283 Removing: /var/run/dpdk/spdk4/config 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:47.544 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:47.544 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:47.544 Removing: /dev/shm/bdev_svc_trace.1 00:42:47.544 Removing: /dev/shm/nvmf_trace.0 00:42:47.544 Removing: /dev/shm/spdk_tgt_trace.pid2662280 00:42:47.544 Removing: /var/run/dpdk/spdk0 00:42:47.544 Removing: /var/run/dpdk/spdk1 00:42:47.544 Removing: /var/run/dpdk/spdk2 00:42:47.544 Removing: /var/run/dpdk/spdk3 00:42:47.544 Removing: /var/run/dpdk/spdk4 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2659565 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2662280 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2663225 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2664602 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2665268 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2666673 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2666890 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2667470 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2668789 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2669721 00:42:47.544 Removing: 
/var/run/dpdk/spdk_pid2670441 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2671006 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2671593 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2672316 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2672676 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2673028 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2673418 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2674795 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2678409 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2679035 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2679564 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2679820 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2681202 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2681536 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2682920 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2683254 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2683726 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2683970 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2684444 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2684679 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2685787 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2686144 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2686543 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2687223 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2687337 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2687676 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2688226 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2688710 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2689151 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2689653 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2690221 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2690581 00:42:47.544 Removing: /var/run/dpdk/spdk_pid2691296 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2692074 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2692452 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2692827 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2693434 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2693851 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2694211 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2694759 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2695255 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2695617 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2696152 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2696662 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2697021 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2697557 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2697923 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2698645 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2703358 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2708713 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2720588 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2721423 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2726528 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2727074 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2732421 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2739463 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2743306 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2756116 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2766949 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2769204 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2770550 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2791507 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2796496 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2895904 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2902388 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2909553 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2920328 00:42:47.805 Removing: 
/var/run/dpdk/spdk_pid2952651 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2958043 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2960044 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2962190 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2962438 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2962756 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2963099 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2963990 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2966165 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2967572 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2968288 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2970998 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2972060 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2973184 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2978701 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2985381 00:42:47.805 Removing: /var/run/dpdk/spdk_pid2991092 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3036672 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3041498 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3048838 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3050860 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3053038 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3058485 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3063607 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3073322 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3073454 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3078570 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3078896 00:42:47.805 Removing: /var/run/dpdk/spdk_pid3079084 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3079583 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3079639 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3080980 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3082957 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3084953 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3086948 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3088940 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3090823 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3098105 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3098814 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3100005 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3101508 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3108020 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3111475 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3118506 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3124848 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3134796 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3143633 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3143704 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3166481 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3167172 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3168072 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3168872 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3170100 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3170949 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3171641 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3172458 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3177691 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3178054 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3185399 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3185741 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3188374 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3195724 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3195730 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3201784 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3204129 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3206627 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3208158 00:42:48.066 Removing: 
/var/run/dpdk/spdk_pid3211331 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3212968 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3223008 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3223685 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3224355 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3227508 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3228177 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3228721 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3233073 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3233404 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3234943 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3235843 00:42:48.066 Removing: /var/run/dpdk/spdk_pid3235998 00:42:48.066 Clean 00:42:48.327 19:47:07 -- common/autotest_common.sh@1451 -- # return 0 00:42:48.327 19:47:07 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:42:48.327 19:47:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:48.327 19:47:07 -- common/autotest_common.sh@10 -- # set +x 00:42:48.327 19:47:07 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:42:48.327 19:47:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:48.327 19:47:07 -- common/autotest_common.sh@10 -- # set +x 00:42:48.327 19:47:07 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:48.327 19:47:07 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:48.327 19:47:07 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:48.327 19:47:07 -- spdk/autotest.sh@391 -- # hash lcov 00:42:48.327 19:47:07 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:42:48.327 19:47:07 -- spdk/autotest.sh@393 -- # hostname 00:42:48.328 19:47:07 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:48.589 geninfo: WARNING: invalid characters removed from testname! 
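The coverage post-processing that starts here follows a capture / merge / filter pattern: the freshly captured cov_test.info is combined with the cov_base.info taken before the tests, then build-tree and helper-app noise is stripped out. Condensed into its essentials (the long --rc option lists from the real commands are elided; spdk_dir and out are placeholders for the repo checkout and the spdk/../output directory used above):

# Capture counters accumulated during the test run.
lcov -q -c --no-external -d "$spdk_dir" -t "$(hostname)" -o "$out/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Remove third-party and helper-app code from the combined report.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done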
00:43:15.175 19:47:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:15.175 19:47:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:17.085 19:47:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:19.628 19:47:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:21.542 19:47:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:24.087 19:47:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:26.061 19:47:44 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:43:26.061 19:47:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:43:26.061 19:47:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:43:26.061 19:47:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:43:26.061 19:47:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:43:26.061 19:47:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:26.061 19:47:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:26.061 19:47:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:26.061 19:47:44 -- paths/export.sh@5 -- $ export PATH
00:43:26.061 19:47:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:26.061 19:47:44 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:43:26.061 19:47:44 -- common/autobuild_common.sh@447 -- $ date +%s
00:43:26.061 19:47:44 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721670464.XXXXXX
00:43:26.061 19:47:44 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721670464.sUwv6G
00:43:26.061 19:47:44 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:43:26.061 19:47:44 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:43:26.061 19:47:44 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:43:26.062 19:47:44 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:43:26.062 19:47:44 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:43:26.062 19:47:44 -- common/autobuild_common.sh@463 -- $ get_config_params
00:43:26.062 19:47:44 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:43:26.062 19:47:44 -- common/autotest_common.sh@10 -- $ set +x
00:43:26.062 19:47:44 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:43:26.062 19:47:44 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:43:26.062 19:47:44 -- pm/common@17 -- $ local monitor
00:43:26.062 19:47:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:26.062 19:47:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:26.062 19:47:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:26.062 19:47:44 -- pm/common@21 -- $ date +%s
00:43:26.062 19:47:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:26.062 19:47:44 -- pm/common@21 -- $ date +%s
00:43:26.062 19:47:44 -- pm/common@25 -- $ sleep 1
00:43:26.062 19:47:44 -- pm/common@21 -- $ date +%s
00:43:26.062 19:47:44 -- pm/common@21 -- $ date +%s
00:43:26.062 19:47:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721670464
00:43:26.062 19:47:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721670464
00:43:26.062 19:47:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721670464
00:43:26.062 19:47:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721670464
00:43:26.062 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721670464_collect-vmstat.pm.log
00:43:26.062 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721670464_collect-cpu-load.pm.log
00:43:26.062 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721670464_collect-cpu-temp.pm.log
00:43:26.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721670464_collect-bmc-pm.bmc.pm.log
00:43:27.266 19:47:45 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:43:27.266 19:47:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:43:27.266 19:47:45 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:27.266 19:47:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:43:27.266 19:47:45 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:43:27.266 19:47:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:43:27.266 19:47:45 -- spdk/autopackage.sh@19 -- $ timing_finish
00:43:27.266 19:47:45 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:27.266 19:47:45 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:43:27.266 19:47:45 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:27.266 19:47:45 -- spdk/autopackage.sh@20 -- $ exit 0
00:43:27.266 19:47:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:43:27.266 19:47:45 -- pm/common@29 -- $ signal_monitor_resources TERM
00:43:27.266 19:47:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:43:27.266 19:47:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:27.266 19:47:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:43:27.266 19:47:45 -- pm/common@44 -- $ pid=3249433
00:43:27.266 19:47:45 -- pm/common@50 -- $ kill -TERM 3249433
00:43:27.266 19:47:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:27.266 19:47:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:43:27.266 19:47:45 -- pm/common@44 -- $ pid=3249434
00:43:27.266 19:47:45 -- pm/common@50 -- $ kill -TERM 3249434
00:43:27.266 19:47:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:27.266 19:47:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:43:27.266 19:47:45 -- pm/common@44 -- $ pid=3249436
00:43:27.266 19:47:45 -- pm/common@50 -- $ kill -TERM 3249436
00:43:27.266 19:47:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:27.266 19:47:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:43:27.266 19:47:45 -- pm/common@44 -- $ pid=3249453
00:43:27.266 19:47:45 -- pm/common@50 -- $ sudo -E kill -TERM 3249453
00:43:27.277 + [[ -n 2539474 ]]
00:43:27.277 + sudo kill 2539474
00:43:27.277 [Pipeline] }
00:43:27.295 [Pipeline] // stage
00:43:27.300 [Pipeline] }
00:43:27.318 [Pipeline] // timeout
00:43:27.324 [Pipeline] }
00:43:27.341 [Pipeline] // catchError
00:43:27.347 [Pipeline] }
00:43:27.366 [Pipeline] // wrap
00:43:27.372 [Pipeline] }
00:43:27.387 [Pipeline] // catchError
00:43:27.398 [Pipeline] stage
00:43:27.401 [Pipeline] { (Epilogue)
00:43:27.417 [Pipeline] catchError
00:43:27.419 [Pipeline] {
00:43:27.434 [Pipeline] echo
00:43:27.435 Cleanup processes
00:43:27.442 [Pipeline] sh
00:43:27.729 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:27.729 3249549 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:43:27.729 3249985 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:27.743 [Pipeline] sh
00:43:28.029 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:28.029 ++ grep -v 'sudo pgrep'
00:43:28.029 ++ awk '{print $1}'
00:43:28.029 + sudo kill -9 3249549
00:43:28.043 [Pipeline] sh
00:43:28.331 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:40.574 [Pipeline] sh
00:43:40.862 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:40.863 Artifacts sizes are good
00:43:40.878 [Pipeline] archiveArtifacts
00:43:40.886 Archiving artifacts
00:43:41.149 [Pipeline] sh
00:43:41.435 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:43:41.451 [Pipeline] cleanWs
00:43:41.461 [WS-CLEANUP] Deleting project workspace...
00:43:41.461 [WS-CLEANUP] Deferred wipeout is used...
00:43:41.469 [WS-CLEANUP] done
00:43:41.472 [Pipeline] }
00:43:41.494 [Pipeline] // catchError
00:43:41.507 [Pipeline] sh
00:43:41.794 + logger -p user.info -t JENKINS-CI
00:43:41.804 [Pipeline] }
00:43:41.821 [Pipeline] // stage
00:43:41.827 [Pipeline] }
00:43:41.846 [Pipeline] // node
00:43:41.852 [Pipeline] End of Pipeline
00:43:41.900 Finished: SUCCESS